perm filename AI.V1[BB,DOC] blob sn#737489 filedate 1984-01-03 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00118 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00016 00002	∂14-May-83  1726	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #1
C00036 00003	∂14-May-83  1726	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #2
C00050 00004	∂14-May-83  1727	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #3
C00059 00005	∂16-May-83  0058	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #4
C00083 00006	∂18-May-83  1313	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #5  
C00095 00007	∂22-May-83  0145	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #6  
C00107 00008	∂22-May-83  1319	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #7  
C00128 00009	∂22-May-83  1248	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #8  
C00148 00010	∂29-May-83  0046	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #9  
C00163 00011	∂03-Jun-83  1832	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #10
C00174 00012	∂03-Jun-83  1853	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #11
C00181 00013	∂07-Jun-83  1708	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #12
C00194 00014	∂08-Jun-83  1339	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #13
C00204 00015	∂11-Jun-83  2255	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #14
C00214 00016	∂15-Jun-83  0011	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #15
C00235 00017	∂16-Jun-83  1922	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #16
C00252 00018	∂26-Jun-83  1707	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #17
C00269 00019	∂26-Jun-83  1751	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #18
C00285 00020	∂03-Jul-83  1810	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #19
C00303 00021	∂06-Jul-83  1833	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #20
C00317 00022	∂11-Jul-83  0352	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #21
C00339 00023	∂18-Jul-83  1950	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #22
C00356 00024	∂21-Jul-83  1918	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #23
C00377 00025	∂21-Jul-83  1819	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #24
C00393 00026	∂21-Jul-83  1640	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #25
C00423 00027	∂25-Jul-83  2359	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #26
C00450 00028	∂28-Jul-83  0912	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #27
C00473 00029	∂29-Jul-83  1004	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #28
C00490 00030	∂29-Jul-83  1911	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #29
C00502 00031	∂02-Aug-83  1514	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #30
C00521 00032	∂02-Aug-83  2352	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #31
C00537 00033	∂04-Aug-83  1211	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #32
C00565 00034	∂05-Aug-83  2115	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #33
C00586 00035	∂08-Aug-83  1500	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #34
C00606 00036	∂09-Aug-83  1920	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #35
C00629 00037	∂09-Aug-83  2027	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #36
C00649 00038	∂09-Aug-83  2149	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #37
C00678 00039	∂09-Aug-83  2330	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #38
C00706 00040	∂16-Aug-83  1113	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #39
C00731 00041	∂16-Aug-83  1333	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #40
C00752 00042	∂17-Aug-83  1713	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #41
C00771 00043	∂18-Aug-83  1135	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #42
C00782 00044	∂19-Aug-83  1927	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #43
C00811 00045	∂22-Aug-83  1145	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #44
C00845 00046	∂22-Aug-83  1347	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #45
C00867 00047	∂23-Aug-83  1228	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #46
C00888 00048	∂24-Aug-83  1206	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #47
C00916 00049	∂25-Aug-83  1057	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #48
C00935 00050	∂29-Aug-83  1311	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #49
C00959 00051	∂30-Aug-83  1143	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #50
C00977 00052	∂30-Aug-83  1825	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #51
C01005 00053	∂31-Aug-83  1538	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #52
C01038 00054	∂02-Sep-83  1043	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #53
C01060 00055	∂09-Sep-83  1317	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #54
C01090 00056	∂09-Sep-83  1628	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #55
C01112 00057	∂09-Sep-83  1728	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #56
C01130 00058	∂15-Sep-83  2007	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #57
C01156 00059	∂16-Sep-83  1714	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #58
C01180 00060	∂19-Sep-83  1751	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #59
C01206 00061	∂20-Sep-83  1121	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #60
C01228 00062	∂22-Sep-83  1847	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #61
C01256 00063	∂25-Sep-83  1736	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #62
C01283 00064	∂25-Sep-83  2055	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #63
C01307 00065	∂26-Sep-83  2348	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #64
C01326 00066	∂29-Sep-83  1120	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #65
C01350 00067	∂29-Sep-83  1438	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #66
C01375 00068	∂29-Sep-83  1610	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #67
C01399 00069	∂03-Oct-83  1104	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #68
C01424 00070	∂03-Oct-83  1255	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #69
C01453 00071	∂03-Oct-83  1907	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #70
C01484 00072	∂06-Oct-83  1525	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #71
C01507 00073	∂10-Oct-83  1623	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #72
C01529 00074	∂10-Oct-83  2157	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #73
C01557 00075	∂11-Oct-83  1950	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #74
C01581 00076	∂12-Oct-83  1827	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #75
C01608 00077	∂13-Oct-83  1804	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #76
C01630 00078	∂14-Oct-83  1545	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #77
C01657 00079	∂14-Oct-83  2049	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #78
C01684 00080	∂17-Oct-83  0120	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #79
C01705 00081	∂20-Oct-83  1541	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #80
C01736 00082	∂24-Oct-83  1255	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #81
C01756 00083	∂26-Oct-83  1614	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #82
C01784 00084	∂27-Oct-83  1859	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #83
C01802 00085	∂28-Oct-83  1402	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #84
C01824 00086	∂31-Oct-83  1445	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #85
C01854 00087	∂31-Oct-83  1951	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #86
C01882 00088	∂01-Nov-83  1649	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #87
C01912 00089	∂03-Nov-83  1710	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #88
C01931 00090	∂04-Nov-83  0029	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #89
C01955 00091	∂05-Nov-83  0107	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #90
C01974 00092	∂07-Nov-83  0920	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #91
C02002 00093	∂07-Nov-83  1507	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #92
C02023 00094	∂07-Nov-83  2011	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #93
C02047 00095	∂10-Nov-83  0230	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #94
C02072 00096	∂09-Nov-83  2344	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #95
C02100 00097	∂14-Nov-83  1831	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #96
C02117 00098	∂14-Nov-83  1702	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #97
C02142 00099	∂15-Nov-83  1838	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #98
C02166 00100	∂16-Nov-83  1906	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #99
C02196 00101	∂20-Nov-83  1722	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #100    
C02216 00102	∂20-Nov-83  2100	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #101    
C02241 00103	∂22-Nov-83  1724	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #102    
C02271 00104	∂27-Nov-83  2131	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #103    
C02295 00105	∂28-Nov-83  1357	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #104    
C02318 00106	∂29-Nov-83  0155	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #105    
C02341 00107	∂29-Nov-83  1837	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #106    
C02373 00108	∂02-Dec-83  0153	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #107    
C02397 00109	∂02-Dec-83  2044	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #108    
C02427 00110	∂05-Dec-83  0250	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #109    
C02448 00111	∂07-Dec-83  0058	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #110    
C02481 00112	∂10-Dec-83  1902	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #111    
C02507 00113	∂14-Dec-83  1459	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #112    
C02536 00114	∂16-Dec-83  1327	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #113    
C02560 00115	∂18-Dec-83  1526	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #114    
C02588 00116	∂21-Dec-83  0613	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #115    
C02608 00117	∂22-Dec-83  2213	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #116    
C02625 00118	∂30-Dec-83  0322	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #117    
C02652 ENDMK
C⊗;
∂14-May-83  1726	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #1
Received: from SU-SCORE by SU-AI with PUP; 14-May-83 17:26 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sat 14 May 83 17:29:57-PDT
Date: Sat 14 May 83 17:16:18-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: AIList Digest   V1 #1
To: Local-AI-BBoard%SAIL@SU-SCORE.ARPA


AIList Digest            Tuesday, 26 Apr 1983       Volume 1 : Issue 1

Today's Topics:
  Welcome
  Charter Membership
  Request for Report Abstracts
  Statistics on IJCAI-83 Papers
  Standardized Correspondence
----------------------------------------------------------------------

Date: Mon 25 Apr 83 14:51:42-PDT
From: Ken Laws <Laws@SRI-AI>
Subject: Welcome


Welcome to the AIList.

I am the moderator of the AIList discussion.  I am responsible
for composing the digest from pending submissions, controlling
the daily volume of mail, keeping an archive, and answering
administrative requests.

You may submit mail for the digest by addressing it to AIList@SRI-AI.
Administrative requests should be sent to AIList-Request@SRI-AI.
An archival copy of all list remailings will be kept; feel free to
ask AIList-Request for back issues until a formal archive system is
instituted.

AIList is open to discussion of any topic related to artificial
intelligence.  My own interests are primarily in

  Expert Systems                        AI Applications
  Knowledge Representation              Knowledge Acquisition
  Problem Solving                       Hierarchical Inference
  Machine Learning                      Pattern Recognition
  AI Techniques                         Data Analysis Techniques

Contributions concerning

  Cognitive Psychology                  Human Perception
  Vision Analysis                       Speech Analysis
  Language Understanding                Natural Languages
  AI Languages                          AI Environments
  Automatic Programming                 AI Systems Support
  Theorem Proving                       Logic Programming
  Robotics                              Automated Design
  Planning and Search                   Cybernetics
  Game Theory                           Computer Science
  Data Abstraction                      Library Science
  Statistical Techniques                Information Theory
  AI Hardware                           Information Display

and related topics are also welcome.  Contributions may be anything
from tutorials to rampant speculation.  In particular, the following
are sought.

  Abstracts                             Reviews
  Lab Descriptions                      Research Overviews
  Work Planned or in Progress           Half-Baked Ideas
  Conference Announcements              Conference Reports
  Bibliographies                        History of AI
  Puzzles and Unsolved Problems         Anecdotes, Jokes, and Poems
  Queries and Requests                  Address Changes (Bindings)

The only real boundaries to the discussion are defined by the topics
of other mailing lists.  Robotic mythology, for instance, might be
more appropriate for SF-LOVERS.  Logic programming and theorem proving
are also covered by the PROLOG list.

I suggest that you "sign" submissions longer than a paragraph so that
readers don't have to scroll backwards to see the FROM line.  Editing
of contributions will usually be limited to text justifications and
spelling corrections.  Editorial remarks and elisions will be marked
with square brackets.  The author will be contacted if significant
editing is required.

I have no objection to distributing material that is destined for
conference proceedings or any other publication.  You may want to
send copies of your submissions to SIGART @USC-ECLC or to the AI
Magazine (currently Engelmore @SUMEX-Aim) for hardcopy publication.
List items should be considered unrefereed working papers, and
opinions to be those of the author and not of any organization.
Copies of list items should credit the original author, not
necessarily the AIList.

The list does not assume copyright, nor does it accept any liability
arising from remailing of submitted material.  I reserve the right,
however, to refuse to remail any contribution that I judge to be
obscene, libelous, irrelevant, or pointless.

Names and net addresses of list members are in the public domain.
Your name will be made available (for noncommercial purposes) unless
special arrangements are made.

Replies to public requests for information should be sent, at least
in "carbon" form, to this list unless the request states otherwise.
If necessary, I will digest or abstract the replies to control the
volume of distributed mail.

Please contribute freely.  I would rather deal with too much material
than with too little.

                                        -- Ken Laws

------------------------------

Date: Mon 25 Apr 83 09:34:04-PDT
From: AIList <AIList-Request@SRI-AI.ARPA>
Subject: Charter Membership


The AIList is off to a good start.  We have approximately 168
subscribers, plus an unknown number through remailing or BBoard
services at

    AI-INFO@CIT-20              DSN-AI@SU-DSN (*)
    AIList@BRL                  AI-BBD.UMCP-CS@UDel-Relay (*)
    AIList@Cornell              BBOARD.AIList@UTEXAS-20 (*)
    bbAI-List@MIT-XX            G.TI.DAK@UTEXAS-20
    AI-BBOARD@SRI-AI            AI-LOCAL@YALE
    Incoming-AIList@SUMEX       AI@RADC-TOPS20
    AIList-Distribution@MIT-EE  AIList-BBOARD@RUTGERS
    Spaf.GATech@UDel-Relay

(Maintainers of the starred BBoards have specifically requested
that local subscribers drop their individual memberships.)


The "charter membership" is distributed as follows:

AIDS-UNIX(2), BBNA, BBNG, BBN-UNIX, BRL(bb), BRL-VLD, CIT-20(bb),
CORNELL(1+bb), CMU-CS-A(12), CMU-CS-C(2), CMU-CS-G, CMU-CS-IUS,
CMU-CS-SPICE, CMU-RI-FAS(2), CMU-RI-ISL, DSN-AI@SU-DSN(1+bb),
GATech@UDel-Relay(bb), KESTREL, MIT-DSPG(2), MIT-EE(bb), MIT-MC(10),
MIT-EECS@MIT-MC, MIT-OZ@MIT-MC(17), MIT-ML(3), MIT-OZ@MIT-ML,
MIT-SPEECH, MIT-XX(5+bb), OFFICE-3, PARC-MAXC(8),
RADC-TOPS-20(bb),RUTGERS(6+bb), S1-C, SRI-AI(5+bb), SRI-CSL,
SRI-KL(2), SRI-TSC, SU-AI@USC-ECL(10), SUMEX(1+bb), SUMEX-AIM(7),
SU-SCORE(11), UCI-20A@Rand-Relay, UCLA-SECURITY, UMASS-CS@UDel-Relay,
UMCP-CS@UDel-Relay(bb), USC-ECL(3), USC-ECLB(2), USC-ECLC,
USC-ISI(2), USC-ISIB(5), USC-ISID, USC-ISIE, USC-ISIF(4), UTAH-20(7),
UTEXAS-20(6+bb), WASHINGTON(5), YALE(3+bb)

                                        -- Ken Laws

------------------------------

Date: 22 Apr 1983 0227-EST
From: TYG%MIT-OZ@MIT-MC
Subject: addition and woe

Please add me to the list.  Sigh.  I came up with the idea of a list
to disseminate abstracts and ordering info for AI papers last Dec.,
but held off due to the Arpanet changeover.  I then got busy with
other things, and planned to get it going in a few weeks.  Sigh.

Anyway, I may as well share my ideas for the list.  I think all sites
doing AI should be asked to submit the following info about papers as
they come out:  Title, Author, Length, Type (Master's thesis, Ph.D.
thesis, Tech report, Journal article, etc.), Abstract, Cost, and
ordering information.  Presumably the person at each site in charge of
publications would enter this.

Good Luck Tom "Next time I won't procrastinate" Galloway

[I would welcome such input.  The "person in charge" need not be a
member of this list.  I suggest that administrative personnel send
such information both to AIList and to SIGART@USC-ECLC.  Ordering 
information for AIList could be abbreviated to a net address if the 
sender is willing to respond to queries.  -- KIL]

------------------------------

Date: Thursday, 21-Apr-83  15:23:45-BST
From: BUNDY    HPS (on ERCC DEC-10)  <bundy@edxa>
Subject: Statistics on IJCAI-83 Papers

[I don't think Alan Bundy will mind my passing along these
statistics.  I have edited the table slightly to fit the 70-column    
capacity of the digesting software made available by Mel Pleasant,
the Human-Nets moderator.  The digester was developed by James
McGrath at SCORE. -- KIL]


                PAPER STATISTICS - IJCAI-83

                        Submitted       Accepted        Moved
Subfield                Long    Short   Long    Short   L -> S

Miscellaneous           -       3       -       1
Automatic Prog.         8       11      1       7       4
Cognitive Modelling     9       32      2       12      3
Expert Systems          31      56      8       31      9
Knowledge Repn.         28      40      7       24      6
Learning & Know. Acq.   14      35      1       22      5
Logic Prog.             14      17      4       9       4
Natural Language        23      74      2       39      7
Planning & Search       11      20      3       11      5
Robotics                11      8       5       7       2
System Support          4       9       -       5       2
Theorem Proving         7       16      5       8       -
Vision                  32      38      10      31      14

        TOTAL           192     359     48      207     61



        COMPARISON WITH PREVIOUS IJCAI CONFERENCES

                                LONG    SHORT   TOTAL

IJCAI-79 Submitted Total        unk     unk     428
IJCAI-79 Accepted Total         83      145     228
IJCAI-79 Acceptance Rate                        53%

IJCAI-81 Submitted Total        unk     unk     576
IJCAI-81 Accepted Total         127     74      201
IJCAI-81 Acceptance Rate                        35%


IJCAI-83 Submitted Total        192     359     551
IJCAI-83 Accepted Total         48      207     255
IJCAI-83 Acceptance Rate                        46%
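The acceptance rates above follow from the accepted and submitted totals.
As a quick check (a modern Python sketch, rounding to the nearest percent):

```python
# Reproduce the acceptance-rate column from the totals in the table above.
totals = {  # conference: (accepted, submitted)
    "IJCAI-79": (228, 428),
    "IJCAI-81": (201, 576),
    "IJCAI-83": (255, 551),
}
rates = {conf: round(100 * acc / sub) for conf, (acc, sub) in totals.items()}
# rates == {'IJCAI-79': 53, 'IJCAI-81': 35, 'IJCAI-83': 46}
```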



                REMARKS

        You will see that I succeeded in my aim of shifting the burden
of papers from the long to the short categories.  This enabled us to
apply high standards to the long papers without decreasing the overall
acceptance rate.

                        Alan

------------------------------

Date: Sun 24 Apr 83 20:41:46-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Standardized Correspondence

[This is arguably more appropriate for Human-Nets, but I want to 
illustrate the level of reporting and/or discussion that I consider 
appropriate for this list.  -- KIL]


The May issue of High Technology describes the Prentice-Hall Letter
Pac system from Dictronics Publishing.  It is a semiautomatic business
letter generator that customizes prototypical letters by substituting 
synonyms categorized into four levels of formality (e.g., ask,
request, demand).  The user need only insert a few particulars before
sending the letter out.

The article also suggests automatic letter reading (i.e., parsing).  
There is already a system that compresses text by discarding all but 
the first sentence of each paragraph.  More sophisticated text 
condensation and text understanding systems are being developed.  A
short-cut is available, however.
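The first-sentence scheme mentioned above is simple enough to sketch.  The
following is an illustrative reconstruction in modern Python, not the actual
system; the paragraph-splitting and sentence-boundary rules are assumptions:

```python
def compress(text):
    """Keep only the first sentence of each blank-line-separated paragraph."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    firsts = []
    for p in paragraphs:
        flat = " ".join(p.split())   # collapse internal line breaks and runs of spaces
        end = flat.find(". ")        # crude sentence boundary: first period-plus-space
        firsts.append(flat if end == -1 else flat[:end + 1])
    return "\n\n".join(firsts)
```

For example, compress("First point.  Supporting detail.\n\nSecond point.")
returns "First point.\n\nSecond point."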

If everyone used Letter Pac or an equivalent, parsing the text would 
be a simple matter of extracting the original generating parameters:  
(dunning-letter-7 formality-level-3 car-payment-overdue $127.38).  The
"Dear Sir" form of the letter would then exist only for transmission 
between computers.
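To make the shortcut concrete, here is a minimal sketch of parameter-driven
generation.  The template text, the synonym table, and the parameter order
are invented for illustration; they are not the actual Letter Pac format:

```python
# Hypothetical formality synonyms, keyed by level (1 = casual, 3 = formal).
SYNONYMS = {"request": {1: "ask for", 2: "request", 3: "demand"}}

# Hypothetical prototype letters, keyed by template id.
TEMPLATES = {
    "dunning-letter-7": ("Dear Sir: We {request} payment of the overdue "
                         "amount of {amount} on your {item}."),
}

def generate(template_id, formality, item, amount):
    """Expand a generating-parameter tuple into letter text."""
    text = TEMPLATES[template_id]
    for word, levels in SYNONYMS.items():
        text = text.replace("{" + word + "}", levels[formality])
    return text.format(amount=amount, item=item)

# Between computers, only the parameters need travel; either end can
# regenerate (or "parse") the English form on demand.
params = ("dunning-letter-7", 3, "car payment", "$127.38")
letter = generate(*params)
```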

If this became common, could elimination of the text form be long in
coming?  I believe that John McCarthy has been working on ideas along
this line.  Most transactions could be handled directly by computers
using standardized transaction formats.  When transmission of English
text is necessary, it might make sense to send preparsed sentences
instead of having one computer synthesize a message and a second one
parse it.  All that is needed is to have identical synthesis and
parsing software available to both machines for those rare occasions
when a human wants to enter the loop.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************
-------

∂14-May-83  1726	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #2
Received: from SU-SCORE by SU-AI with PUP; 14-May-83 17:26 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sat 14 May 83 17:30:13-PDT
Date: Sat 14 May 83 17:17:26-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: AIList Digest   V1 #2
To: Local-AI-BBoard%SAIL@SU-SCORE.ARPA

US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467


AIList Digest             Sunday, 1 May 1983        Volume 1 : Issue 2

Today's Topics:
  New BBoards
  The T Programming Language
  Parallel Nonnumeric Algorithms
  Pattern Recognition
  Standardized Correspondence
  Alternate Distribution of AIlist
  Facetiae
----------------------------------------------------------------------

Date: Sat 30 Apr 83 17:17:00-PDT
From: AIList <AIList-Request@SRI-AI.ARPA>
Subject: New BBoards


The following new BBoards and remailing lists have been created:

    AIList-BBOARD@RUTGERS
    NYU-AIList@NYU
    "XeroxAIList↑.PA"@PARC-MAXC
    UCI-AIList.UCI@Rand-Relay

I am told that the PARC list has 94 members.  As yet there is no 
BBoard at CMU (36 members); someone might want to establish one.  I 
will publish an updated list of hosts after the membership settles 
down.

                                        -- Ken Laws

------------------------------

Date: Tue, 26 Apr 83 18:26:42 EDT
From: John O'Donnell <Odonnell@YALE.ARPA>
Subject: The T Programming Language

I am pleased to announce the availability of our implementation of the
T programming language for the VAX under the Unix (4.xBSD) and VMS
(3.x) operating systems and for the Apollo Domain workstation.

T is a new dialect of Lisp comparable in power to other recent
dialects such as Lisp Machine Lisp and Common Lisp, but fundamentally
more similar in spirit to Scheme than to traditional Lisps.

The current system, version 2, is in production use at Yale and
elsewhere, in AI and systems research and in education.  A number of
large programs have been built in T, and the implementation is
acceptably stable and robust.  Yale and Harvard successfully taught
undergraduate courses this semester in T (Harvard used Sussman and
Abelson's 6.001 course).  Much work remains to be done; we are
currently expanding the programming environment and improving
performance.  Our next release is planned for sometime this fall.

Please contact me directly if you're interested in getting the
distribution.

                          John O'Donnell
                          Department of Computer Science
                          Box 2158 Yale Station
                          New Haven CT 06520
                          (203) 432-4666
                          ODonnell@Yale
                          ...decvax!yale-comix!odonnell

------------------------------

Date: Thu 28 Apr 83 14:40:26-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: parallel non-numeric algorithms

        Part of my Ph.D. work has been in parallel processing
algorithms in graph theory (unfortunately, no hardware is currently
available for an implementation, but that only makes the excursion a
little less satisfying). Specifically, I have been simulating the
performance of an algorithm for the utilization of parallel processing
in speeding up the common subgraph search problem. Commonly, this
problem involves finding all sufficiently large subgraphs common to
two given graphs.  No efficient algorithm exists for doing this
search.
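A brute-force version of this search is easy to state, which makes its cost
visible.  The sketch below (an illustrative toy, not the algorithm being
simulated) enumerates every pair of k-node induced subgraphs and every
relabeling between them; graphs are sets of undirected edges:

```python
from itertools import combinations, permutations

def induced(edges, nodes):
    """Edges of the subgraph induced by a node subset (edges are frozensets)."""
    s = set(nodes)
    return {e for e in edges if e <= s}

def common_subgraphs(g1, g2, k):
    """All pairs of k-node subsets whose induced subgraphs are isomorphic.
    Exhaustive enumeration: exponential in k, reflecting the lack of an
    efficient algorithm for this search."""
    n1 = sorted({u for e in g1 for u in e})
    n2 = sorted({u for e in g2 for u in e})
    pairs = []
    for a in combinations(n1, k):
        ea = induced(g1, a)
        for b in combinations(n2, k):
            eb = induced(g2, b)
            for perm in permutations(b):   # try every relabeling of a onto b
                m = dict(zip(a, perm))
                if {frozenset(m[u] for u in e) for e in ea} == eb:
                    pairs.append((a, b))
                    break
    return pairs

g1 = {frozenset(e) for e in [(1, 2), (2, 3)]}          # path 1-2-3
g2 = {frozenset(e) for e in [(4, 5), (5, 6), (4, 6)]}  # triangle 4-5-6
# Every edge of the path matches every edge of the triangle (6 pairs),
# but the full 3-node graphs do not match.
```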

        I know that several AI groups are working on parallel
processing in AI, but have not found any discussion involving graph
searching techniques. The bias in parallel processing has been toward 
numerical algorithms and the use of array processors; I figured that
there MUST be some AI group working at parallel processing in a
non-numerical field such as graph searching.  I would like to hear
from anyone who knows of such or similar work.

        By the way, I had heard that workers had had 'problems' with
the parallel LISP machines, but have not been able to pin anyone down
exactly as to the nature or extent of these problems.  Anyone know
exactly what was discovered in those researches?

Thanks--

David Rogers DROGERS@SUMEX-AIM.ARPA

------------------------------

Date: Fri 29 Apr 83 08:35:25-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Pattern Recognition


PR People should take note of "Candide's Practical Principles of
Experimental Pattern Recognition", by George Nagy, in the March issue
of IEEE PAMI.  I particularly like

    ... any feature may be presumed to be normally
    distributed if its mean and variance can be
    estimated from its empirically observed distribution.

and

    ... adapting the classifier to the test set is
    superior to adaptation on the training set.

                                -- Ken Laws

------------------------------

Date: 30 April 1983 04:00 EDT
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Standardized Correspondence

Rather than distributing the same software to every site, it would
make more sense to develop a machine-to-machine language which would
express (dunning-letter-7 formality-level-3 car-payment-overdue
$127.38) in an easily parseable form.  English is complex, redundant,
and vague.  Is there any reason why we can't design a language which
is simple, efficient, and precise?  It would be awful for
(intelligent) people, but great for (stupid) machines.

-- Steve

[If the parsing and synthesis functions were common, the software
might be compiled into hardware; if rare, it might be accessed
remotely over a network.  I doubt that software storage requirements
will be a problem for long.

There have been attempts at developing simpler natural languages.
One idea is to structure the language so that any idea can only
be expressed in one canonical form (DuckSpeak, Basic English,
controlled-vocabulary English as taught in our grade schools).
The other idea is to allow any semantic term to fill any syntactic
slot (sign language, Loglan).

Languages of the first type present difficulties because of the
overloading of words (e.g., "get" in English).  This can be avoided
in limited domains such as repair manuals, but for general usage
something like Roger Schank's canonical forms would be needed.

I don't know what computational difficulties are presented by
languages of the second type.  If the Whorfian hypothesis is correct,
more ideas can be "thought", which may lead to greater complexity.
On the other hand, the algorithm needn't keep track of awkward or
stereotyped methods of expressing basically simple concepts.  ("I
greened my house", or what is the past tense of "beware"?)

I trust that computational difficulties can be overcome.  The
greatest problem in achieving user acceptance of parsed transmissions
may be that resynthesis will generate a paraphrase, or corrected
version, of the original.  Humans tend to be sentimental about their
own syntactic constructs, even down to where the lines are broken.

					-- KIL ]

------------------------------

Date: Thu 28 Apr 83 00:52:52-PDT
From: Dan Dolata <DOLATA@SUMEX-AIM.ARPA>
Subject: Alternate distribution of AIlist


I am moving to Sweden soon, and while I will be able to touch back to 
my home base via international-net occasionally, the long distance 
rates make it prohibitive to try to read any large number of lines 
each day.  I was wondering if it might be possible to set up some sort
of system where AIlist could be spooled onto small tapes or floppies
monthly, and mailed to people who are away from the net?  Of course, I
would be happy to pay for mailing costs, and would be happy to buy the
person who did the grunt work a nice meal when I got back from Europe
(or in Karlsruhe during IJCAI).

Of course, if it became necessary to charge $ because you had to hire 
a person to mount the media, I would be happy to subscribe!

Thanks for your time
        Dan [dolata@sumex]

[I'm afraid that I haven't the resources to oblige.  I suggest that
printed copies be sent, providing that doesn't violate any technology
export laws.  Dan would like to know if others are interested in
getting or providing machine-readable copies.  -- KIL]

------------------------------

Date: Fri 29 Apr 83 09:02:22-PDT
From: AIList <AIList-Request@SRI-AI>
Subject: Facetiae


I hope everyone kept V1 #1.  Someday it may be as valuable as the
first edition of Superman comics.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************
-------

∂14-May-83  1727	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #3
Received: from SU-SCORE by SU-AI with PUP; 14-May-83 17:27 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sat 14 May 83 17:30:27-PDT
Date: Sat 14 May 83 17:18:29-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: AIList Digest   V1 #3
To: Local-AI-BBoard%SAIL@SU-SCORE.ARPA

Date: Sunday, May 8, 1983 11:12PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #3
To: AIList@SRI-AI


AIList Digest             Monday, 9 May 1983        Volume 1 : Issue 3

Today's Topics:
  Administrivia
  Re: the Whorfian hypothesis
  Re: Artificial Languages
  Putting programmers out of a Job?
----------------------------------------------------------------------

Date: Sun 8 May 83 23:05:43-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: Administrivia


We have been joined by new BBoards or remailing nodes at

    AI@NLM-MCS
    AIList-Usenet@SRI-UNIX
    Post-AIList.UNC@UDel-Relay

The Usenet connection is a two-way link with the net.ai discussion
group.  More on this later.

I have been responding to additions by sending out back issues.  
Henceforth I will only send a welcome message and statement of policy.
Back issues are available by request.

I have tried to establish contact with all who have asked to be
enrolled, but several sites have been unreachable for the last two
weeks.  I cannot guarantee delivery of every issue to every site, and
may cut short the usual two-week retry period in order to reduce the
system load.

                                        -- Ken Laws

------------------------------

Date: 2 May 1983 1038-EDT (Monday)
From: Robert.Frederking@CMU-CS-A (C410RF60)
Subject: Re: the Whorfian hypothesis

        I just thought I should point out that the Whorfian hypothesis
is one of those things which was rejected a long time ago in its
original field (at least in its strong form), but has remained
interesting and widely talked about in other fields.  At the time
Whorf hypothesized that language constrains the way people think, the
views of language and culture were that language was a highly
systematic, constrained thing, whereas culture was just an arbitrary
collection of facts.  By the time Whorf was getting really popular in
other circles, anthropologists had realized that culture was also
systematic, with constraints between different parts.  In other words,
the likelihood that an idea will be invented or imported by a culture
depends to a degree on the kinds of ideas the people in the culture 
are already familiar with.

        The current view in anthropology (current in the 70s, that is)
is that language and culture do influence each other, but that the
influence is much weaker, more subtle, and more bidirectional than
the Whorfian hypothesis suggested.

------------------------------

Date: 3 May 83 17:31:01 EDT  (Tue)
From: Fred Blonder <fred.umcp-cs@UDel-Relay>
Subject: Artificial Languages

[Fred has pointed out that the "DuckSpeak" I cited was officially
called Newspeak in Orwell's 1984.  -- KIL]

Also: are you aware of Esperanto? Its grammar (only 16 rules) allows
any word to function as any part of speech by an appropriate change
to its suffix.
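Among those 16 rules is a regular suffix system: the final vowel alone marks the part of speech. A minimal sketch (the endings and the root are standard Esperanto; the function itself is just illustrative):

```python
# Esperanto marks part of speech with a final vowel on the root:
# -o noun, -a adjective, -e adverb, -i verb infinitive.
SUFFIXES = {"noun": "o", "adjective": "a", "adverb": "e", "verb": "i"}

def inflect(root: str, part_of_speech: str) -> str:
    """Attach the part-of-speech ending to a bare Esperanto root."""
    return root + SUFFIXES[part_of_speech]

# The root 'parol-' ("speak") serves as any part of speech:
print(inflect("parol", "noun"))       # parolo - speech
print(inflect("parol", "adjective"))  # parola - spoken
print(inflect("parol", "adverb"))     # parole - orally
print(inflect("parol", "verb"))       # paroli - to speak
```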

------------------------------

[We are now linked to the Usenet net.ai discussion, which is
more nearly real-time than the AIList digest.  The following
is evidently from a continuing discussion, and I apologize to
the author if he did not expect such a wide audience.  A more 
formal submission system might be arranged if Usenet members
want both private and public discussions, or if they object to
receiving digested copies of previously seen messages.  The
possibility of forwarding undigested AIList submissions to Usenet
is being considered. -- KIL]


Date: 1 May 83 22:31:14-PDT (Sun)
From: decvax!utzoo!watmath!bstempleton @ Ucb-Vax
Subject: Putting programmers out of a Job?

I hope the person who stated that this self-programming computer
project will eliminate the need for programmers is not on the AI
project.  If so, they should fire him/her and get somebody who is a
good programmer.  Programming is a highly creative art that uses some
highly complex technological tools.  No AI project will put a good
programmer out of a job without being able to pass a Turing test
first.  This is because a good programmer spends more time designing
than coding.

In fact, I would be all for a machine which I could tell to write a
program to traverse a data structure doing this and that to it.  It
would get rid of all the tedious stuff, and I would be able to produce
all kinds of wonderful programs.  Out of a job?  Hardly - I'd be rich,
and so would a lot of other people, notably those on AI projects.

I doubt that ten years will show a computer that can do things like
design (or invent) things like screen editors, VisiCalc(TM),
relational databases and compilers.  If it could do all that, it's
intelligent - not just a self-programming machine.

------------------------------

End of AIList Digest
********************
-------

∂16-May-83  0058	LAWS%SRI-AI.ARPA@SCORE 	AIList Digest   V1 #4
Received: from SU-SCORE by SU-AI with PUP; 16-May-83 00:57 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Mon 16 May 83 00:03:35-PDT
Date: Sunday, May 15, 1983 9:33PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #4
To: AIList@SRI-AI


AIList Digest            Monday, 16 May 1983        Volume 1 : Issue 4

Today's Topics:
  Research Posts in AI at Edinburgh University
  AI at the AAAS
  Expert System for IC Processing
  Re: Artificial languages
  Loglan
  Excerpt about AI from a NYTimes interview with Stanislaw Lem
----------------------------------------------------------------------

Date: Wednesday, 11-May-83  16:29:52-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Research Posts in AI at Edinburgh University

--------



                        UNIVERSITY OF EDINBURGH
                 DEPARTMENT OF ARTIFICIAL INTELLIGENCE


                           2 Research Fellows


Applications are invited for these SERC-funded posts, tenable from
July 1 1983 or a mutually agreed date, to work on a project to
formulate techniques whereby an intelligent knowledge-based training
system can deduce what a user's aims are. Experience of UNIX and
programming is essential.  Experience of PROLOG or LISP and some
knowledge of IKBS techniques would be an advantage.  The posts are
tenable for three years, on the R1A scale (6375-11105 pounds).
Candidates should have a higher degree in a relevant discipline, such
as Mathematics, Computer Science or Experimental Psychology.
Applications, including a curriculum vitae and names of two referees,
should be sent to The Secretary's Office, University of Edinburgh, Old
College, South Bridge, Edinburgh EH8 9YL, Scotland, (from whom further
details can be obtained), by 31 May 1983.


------------------------------

Date: 13 May 83 10:53:04 EDT
From: DAVID.LEWIN  <LEWIN@CMU-CS-C>
Subject: AI at the AAAS

The following session at the upcoming AAAS meeting should be of 
interest to readers of AI-LIST.

ARTIFICIAL INTELLIGENCE: ITS SCIENCE AND APPLICATION
American Association for the Advancement of Science
Annual Meeting- Detroit, MI; Sunday, May 29, 1983

Arranged by: Daniel Berg, Provost--Carnegie-Mellon University
             Raj Reddy, Director--Robotics Institute, CMU

"Robust Man-Machine Communication"
  Jaime Carbonell, CMU

"Artificial Intelligence Applications in Electronic Manufacturing"
  Samuel H. Fuller, Digital Equipment Corp. (Hudson, MA)

"Expert Systems in VLSI Design"
  Mark Stefik, Xerox-PARC

"Science Needs in Artificial Intelligence"
  Nils Nilsson, SRI International

"Medical Applications of Artificial Intelligence"
  Jack D. Myers, Univ. of Pittsburgh

"The Application of Strategic Planning and Artificial Intelligence to
the Management of the Urban Infrastructure"
  Charles Steger, Virginia Polytechnic Inst. & State Univ.

------------------------------

Date: 14 May 1983 2154-PDT (Saturday)
From: ricks%UCBCAD@Berkeley
Subject: Expert System for IC Processing


I'm about to start preliminary work on an expert system for integrated
circuit processing.  At this time, its not clear whether it will deal
with diagnosing and correcting problems in a process line, or with
designing new process lines.

I would like to know if anybody has done any work in this area, and
what the readers of this list think about building an expert system
for this purpose.

I realize that this letter is somewhat vague, but I'm in the early
stages of this and I'd like to see what has been done and what options
I have.

                        Thanks,

                        Rick L Spickelmier
                        ricks@berkeley

                        University of California
                        Electronics Research Laboratory
                        Cory Hall
                        Berkeley, CA 94720
                        (415) 642-8186

------------------------------

Date: 11 May 1983 19:10 EDT
From: Stephen G. Rowley <SGR @ MIT-MC>
Subject: Artificial languages

Since people seem to be interested in artificial languages and the
Whorfian hypothesis, some words about Loglan might be interesting.
(If that's what started the discussion and I missed it, apologies to
all...)

Loglan is a language invented by J. Brown in the mid-50's to test the 
Whorfian hypothesis with a radically different language.  It's got a
simple grammar believed to be utterly unambiguous, a syntax based on
predicate calculus, and a morphology that tells you what "part of
speech" (to stretch a term) a word is from its vowel-consonant pattern.

Of the 14 non-vacuous logical connectives, all are pronounceable in
one syllable.  By comparison, English dances about a LOT to say some
of them.
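The count of 14 can be checked by brute force: there are 2^4 = 16 possible binary truth functions, and only the two constants (contradiction and tautology) are vacuous. A quick sketch (mine, not part of the original post):

```python
from itertools import product

# A binary connective on propositions p, q is determined by its
# 4-entry truth table over (p,q) in {(0,0),(0,1),(1,0),(1,1)}.
tables = list(product((0, 1), repeat=4))
assert len(tables) == 16  # 2**4 possible connectives

# The two vacuous connectives ignore p and q entirely: the constant
# contradiction (0,0,0,0) and the constant tautology (1,1,1,1).
non_vacuous = [t for t in tables if len(set(t)) > 1]
print(len(non_vacuous))  # 14
```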

There are some books about it, and even a couple of regular journals.
Once upon a time, there was a Loglan mailing list here at MIT, but it
died of lack of interest.

        -SGR

------------------------------

[Here is further info on Loglan culled from Human-Nets. -- KIL]

Date: 11 Dec 1981 2314-PST
From: JSP at WASHINGTON
Subject: Loglan as command language.

  English is optimized to serve as a verbal means of communication 
between intelligences.  It would be highly surprising if it turned out
to be optimal for the much different task of communicating between an 
intelligent (human) and a stupid (computer) via keyboard.  In fact, it
would be surprising if English proved well suited to any sort of 
precise description, given that various mathematical notations, Algol 
and BNF, for example, all originated as attempts to escape the 
ambiguity and opacity of English.  (Correct me if I'm wrong, but I 
seem to recall that Algol was originally a publication language for 
human-human communication, programming applications coming later.)
  Much the same may be said, with less force, for Loglan, which is 
also targeted on human-human communication, albeit with a special 
focus on simplicity and avoidance of syntactic ambiguity.  (Other 
Loglanists might not agree.)
  For those interested, the Loglan Institute is alive and well, if 
rather hard to find pending completion of a revised grammar and word 
morphology.  I'd be happy to correspond with anyone interested in the 
language...  and delighted to hear from any YACCaholic TL subscribers 
interested in working on the grammar...
        --Jeff Prothero

------------------------------

Date: 11 Dec 1981 06:46:30-PST
From: decvax!pur-ee!purdue!kad at Berkeley (Ken Dickey at Purdue CS)
Subject: Loglan

I have received several requests for more information on Loglan, a 
language which may be ideal for man-computer communication.  Here is a
brief description:


Synopsis: (from the book jacket of LOGLAN 1: A LOGICAL LANGUAGE, James
C. Brown, Third Edition)

        Loglan is a language designed to test the Sapir-Whorf 
hypothesis that the natural languages limit human thought.  It does 
so by pushing those limits outward in predictable directions by:

*incorporating the notational elegance of symbolic logic (it is 
TRANSFORMATIONALLY POWERFUL);

*forcing the fewest possible assumptions about "reality" on its 
speakers (it is METAPHYSICALLY PARSIMONIOUS);

*removing all structural sources of ambiguity (in Loglan anything, no 
matter how implausible, can be said clearly; for it is SYNTACTICALLY 
UNAMBIGUOUS);

*generalizing all semantic operations (whatever can be done to any 
Loglan word can be done to every Loglan word; for it is SEMANTICALLY 
NON-RESTRICTIVE);

*deriving its basic word-stock from eight natural languages, including
three Oriental ones (it is therefore CULTURALLY NEUTRAL);


Notes:
        Loglan has a small grammar (an order of magnitude smaller than
any "natural" grammar).

        It is isomorphic (spelled phonetically -- all punctuation is 
spoken).

        There is a set of rules for word usage so that words are 
uniquely resolvable (no "Mairzy Doats" problem).

        The most frequently used grammatical operators are the 
shortest words.

        The word stock is derived from eight languages (Hindi, 
Japanese, Mandarin Chinese, English, Spanish, Russian, French, and 
German), weighted by usage for recognizability.  That is, within the 
constraints of Loglan word-form rules, words are constructed to be 
mnemonic to most of the world's speakers.

        Loglan "predicates" are, in a sense, complete.  For example 
MATMA means X is the MOTHER of Y by father W.  Joan matma == Joan is 
the mother of .. by .. == Joan is a mother.  Matma Paul == Paul's 
mother, etc.  These "slots" can change positions by means of 
operators.
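Ken's description of complete predicates with movable argument slots can be modeled as a relation with a fixed slot order, where unstated arguments stay open. The sketch below is only an illustration of that idea (the names and the permutation mechanism are mine), not an actual Loglan tool:

```python
# A Loglan-style predicate is a relation with a fixed slot order.
# matma: X is the mother of Y by father W.  Unstated arguments are
# left open (".."), and slot reordering stands in for Loglan's
# argument-conversion operators.
def predicate(name, slots):
    def apply(*args, order=None):
        positions = order if order is not None else range(len(args))
        filled = dict.fromkeys(slots, "..")
        for pos, value in zip(positions, args):
            filled[slots[pos]] = value
        parts = ", ".join(f"{s}={v}" for s, v in filled.items())
        return f"{name}({parts})"
    return apply

matma = predicate("matma", ["mother", "child", "father"])
print(matma("Joan"))             # matma(mother=Joan, child=.., father=..)
print(matma("Paul", order=[1]))  # matma(mother=.., child=Paul, father=..)
```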

        Modifiers precede modified words.  Garfs school => a garfs 
type of school (a school FOR garfs) as opposed to a school BELONGING 
to garfs.

        Language assumptions can be quite different. For example, 
there are a number of words for "yes", meaning "yes, I will", "yes, I 
agree", etc.

        Although considered an experimental tool, there are people 
who actually speak it.  (It is a USEFUL tool.)


Pointer: The Loglan Institute
         2261 Soledad Rancho Road
         San Diego, California 92109


Since I am only an armchair linguist, please consult the above 
pointer for more information.


                                        -Ken

------------------------------


Date: 8 Apr 1982 01:32:44-PST
From: ihnss!houxi!u1100a!rick at Berkeley
Subject: Loglan

A while ago somebody (I believe it was in fa.human-nets during a 
discussion of sexism in personal pronouns) asked the question "What 
does Loglan do about gender?".

As usual with such questions the answer is not easy to describe in a 
few words.  But to simplify somewhat, Loglan has no concept of 
grammatical gender at all.  The language has a series of five words 
that act (approximately) like third person pronouns, but there is no 
notion of sex associated with them.

Loglan also does away with most of the usual grammatical categories, 
such as "nouns", "adjectives" and "verbs".  In their place it has a 
single category called "predicate".  Thus the loglan word "blanu" can 
be variously translated as "blue" (an adjective), "is a blue thing" (a
verb-like usage), and "blue thing" (a noun-like usage).

Loglan is uninflected. It has no declensions or conjugations.  But it 
does have a flock of "little words" that serve various grammatical and
punctuational purposes.  They also take the place of such affixes as 
"-ness" (as in "blueness") in English.

More information about Loglan can be gotten by writing to:

                        The Loglan Institute, Inc.
                        2261 Soledad Rancho Road
                        San Diego, CA 92109

------------------------------

Date: Sun 15 May 83 12:17:41-PDT
From: Robert Amsler <AMSLER@SRI-AI.ARPA>
Subject: Excerpt about AI from a NYTimes interview with Stanislaw Lem

Sunday, March 20th, NYTimes Book Review Interview with Stanislaw Lem
by Peter Engel

Interviewer: "You mentioned robots, and certainly one of the most 
important themes in your writing is the equality of men and robots as
thinking, sentient beings.  Do you feel that artificial intelligence
at this level will be achieved within the foreseeable future?"


Lem: "My opinion is that in roughly 100 years we will arrive at an
artificial intelligence that is more intelligent and reasonable than
human intelligence, but it will be completely different.  There are no
signs indicating that computers will in certain fields become equal to
men. You should not be misled by the fact that you can play chess with
a computer. If you want to accomplish certain individual tasks,
computers are fine. But when you are talking about psychological
matters, every one of us carries in his head the heritage of the
armored fish, the dinosaurs, and other mammals. These limitations do
not exist outside the domain of biological evolution. And there's no
reason why we should imitate them -- the very idea is silly. In the 
field of mechanics it would be the same as if the Arabs were to say
they didn't want airplanes and automobiles, only improved camels. Or
that you shouldn't supply automobiles with wheels, that you must
invent mechanical legs.

I'm going to show you a book. 'Golem XIV' is going to be published
next year in America. It's a story about the construction of a
supercomputer and how it didn't want to solve the military task it was
given, the purpose it had been constructed for in the first place.  So
it started to devote itself to higher philosophical problems. There
are two stories in 'Golem XIV,' two lectures for scientists. In the
first Golem talks about humans and the way it sees them, in the second
about itself. It tries to explain that it's already arrived at a level
of biological evolution will never reach on it own (sic). It's on the
lowest rung of a ladder, and above it there might exist now or in the
future more potent intelligences. Golem does not know whether there 
are any bounds in its progress to the upper sphere. And when it, in a
manner of speaking, takes leave of man, it is primarily for the
purpose of advancing further up this ladder.

In my own view, man will probably never be able to understand and
recognize everything directly, but in an indirect manner he will be
able to achieve command of everything if he constructs intelligence
amplifiers to fulfill his wishes. Like a small child, he will then be 
receiving gifts. But he will not be able to perceive the world
directly, like a small child who is given an electric railway. The
child can play with it, he can even dismantle it, but he will not
understand Maxwell's theory of electricity. The main difference is
that the child will one day become an adult, and then if he wants he
will eventually study and understand Maxwell's theory. But we will
never grow up any further. We will only be able to receive gifts from
the giants of intelligence that we'll be able to build.  There is a
limit to human perception, and beyond this horizon the fruit of
observation will be gleaned from other beings, research machines or
whatever. Progress may continue, but we will somehow be staying
behind."

------------------------------

End of AIList Digest
********************

∂18-May-83  1313	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #5  
Received: from SU-SCORE by SU-AI with TCP/SMTP; 18 May 83  13:13:28 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 18 May 83 12:42:06-PDT
Date: Wednesday, May 18, 1983 9:33AM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #5
To: AIList@SRI-AI


AIList Digest           Wednesday, 18 May 1983      Volume 1 : Issue 5

Today's Topics:
  AI in Business Symposium
  Expert Systems Reports
----------------------------------------------------------------------

Date: Mon 16 May 83 06:34:58-PDT
From: Ted Markowitz <G.TJM@SU-SCORE.ARPA>
Subject: AI in Business Symposium

[I apologize for not getting this out before the conference, but my
net connection has been down since Monday morning.  -- KIL]


I'd just like to remind folks in the NYC area that NYU is offering a
3-day symposium on AI in Business. Among those to speak will be 
Robert Bobrow, Rich Duda, Harry Pople, John McDermott, and Roger
Schank.  Several of the lectures deal with NLP and expert systems both
in the abstract and as they apply in the real world.

The symposium is on 5/18-20 at NYU (100 Trinity Place, NY, NY 10006).
For more information call 212-285-6120.

--Ted

------------------------------

Date: Tue 17 May 83 23:18:48-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems Reports

Here is a selection of recent technical reports relating to expert 
systems and hierarchical inference.  I would appreciate additions, 
particularly any relating to expert systems for image understanding 
and general vision.

                                -- Ken Laws


J.S. Aikins, J.C. Kunz, E.H. Shortliffe, and R.J. Fallat, PUFF: An 
Expert System for Interpretation of Pulmonary Function Data.  Stanford
U. Comp. Sci. Dept., STAN-CS-82-931; Stanford U. Comp. Sci.  Dept.
Heuristic Programming Project, HPP-82-013, 1982, 21p.

C. Apte, Expert Knowledge Management for Multi-Level Modelling.  
Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-41, 1982.

B.G. Buchanan and R.O. Duda, Principles of Rule Based Expert Systems.
Stanford U. Comp. Sci. Dept., STAN-CS-82-926; Stanford U. Comp. Sci.  
Dept. Heuristic Programming Project, HPP-82-014, 1982, 55p.

B.G. Buchanan, Partial Bibliography of Work on Expert Systems.  
Stanford U. Comp. Sci. Dept., STAN-CS-82-953; Stanford U. Comp. Sci.  
Dept. Heuristic Programming Project, HPP-82-30, 1982, 13p.

A. Bundy and B. Silver, A Critical Survey of Rule Learning Programs.  
Edinburgh U. A.I. Dept., Res. Paper 169, 1982.

R. Davis, Expert Systems: Where are We? And Where Do We Go from Here?
M.I.T. A.I. Lab., Memo 665, 1982.

T.G. Dietterich, B. London, K. Clarkson, and G. Dromey, Learning and 
Inductive Inference (a section of the Handbook of Artificial 
Intelligence, edited by Paul R.  Cohen and Edward A. Feigenbaum).  
Stanford U. Comp. Sci. Dept., STAN-CS-82-913; Stanford U. Comp. Sci.  
Dept. Heuristic Programming Project, HPP-82-010, 1982, 215p.

G.A. Drastal and C.A. Kulikowski, Knowledge Based Acquisition of Rules
for Medical Diagnosis.  Rutgers U. Comp. Sci. Res. Lab., CBM-TM-97,
1982.

N.V. Findler, An Expert Subsystem Based on Generalized Production 
Rules.  Arizona State U. Comp. Sci. Dept., TR-82-003, 1982.

N.V. Findler and R. Lo, A Note on the Functional Estimation of Values 
of Hidden Variables--An Extended Module for Expert Systems.  Arizona 
State U. Comp. Sci. Dept., TR-82-004, 1982.

K.E. Huff and V.R. Lesser, Knowledge Based Command Understanding: An 
Example for the Software Development Environment.  Massachusetts U.  
Comp. & Info. Sci. Dept., COINS Tech.Rpt. 82-06, 1982.

J.K. Kastner, S.M. Weiss, and C.A. Kulikowski, Treatment Selection and
Explanation in Expert Medical Consultation: Application to a Model of
Ocular Herpes Simplex.  Rutgers U. Comp. Sci. Res. Lab., CBM-TR-132,
1982.

R.M. Keller, A Survey of Research in Strategy Acquisition.  Rutgers U.
Comp. Sci. Dept., DCS-TR-115, 1982.

V.E. Kelly and L.I. Steinberg, The Critter System: Analyzing Digital 
Circuits by Propagating Behaviors and Specifications.  Rutgers U.  
Comp. Sci. Res. Lab., LCSR-TR-030, 1982.

J.J. King, An Investigation of Expert Systems Technology for Automated
Troubleshooting of Scientific Instrumentation.  Hewlett Packard Co.
Comp. Sci. Lab., CSL-82-012; Hewlett Packard Co. Comp.  Res. Center,
CRC-TR-82-002, 1982.

J.J. King, Artificial Intelligence Techniques for Device 
Troubleshooting.  Hewlett Packard Co. Comp. Sci. Lab., CSL-82-009; 
Hewlett Packard Co. Comp. Res. Center, CRC-TR-82-004, 1982.

G.M.E. Lafue and T.M. Mitchell, Data Base Management Systems and 
Expert Systems for CAD.  Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-028,
1982.

R.J. Lytle, Site Characterization using Knowledge Engineering -- An 
Approach for Improving Future Performance.  Cal U. Lawrence Livermore 
Lab., UCID-19560, 1982.

T.M. Mitchell, P.E. Utgoff, and R. Banerji, Learning by 
Experimentation: Acquiring and Modifying Problem Solving Heuristics.  
Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-31, 1982.

P.G. Politakis, Using Empirical Analysis to Refine Expert System 
Knowledge Bases.  Rutgers U. Comp. Sci. Res. Lab., CBM-TR-130, Ph.D.  
Thesis, 1982.

M.D. Rychener, Approaches to Knowledge Acquisition: The Instructable 
Production System Project.  Carnegie Mellon U. Comp. Sci. Dept., 1981.

R.D. Schachter, An Incentive Approach to Eliciting Probabilities.  
Cal. U., Berkeley. O.R. Center, ORC 82-09, 1982.

E.H. Shortliffe and L.M. Fagan, Expert Systems Research: Modeling the 
Medical Decision Making Process.  Stanford U. Comp. Sci. Dept., 
STAN-CS-82-932; Stanford U. Comp. Sci. Dept. Heuristic Programming 
Project, HPP-82-003, 1982, 23p.

M. Suwa, A.C. Scott, and E.H. Shortliffe, An Approach to Verifying 
Completeness and Consistency in a Rule Based Expert System.  Stanford 
U. Comp. Sci. Dept., STAN-CS-82-922, 1982, 19p.

J.A. Wald and C.J. Colbourn, Steiner Trees, Partial 2-Trees, and 
Minimum IFI Networks.  Saskatchewan U. Computational Sci. Dept., Rpt.
82-06, 1982.

J.A. Wald and C.J. Colbourn, Steiner Trees in Probabilistic Networks.
Saskatchewan U. Computational Sci. Dept., Rpt. 82-07, 1982.

A. Walker, Automatic Generation of Explanations of Results from 
Knowledge Bases.  IBM Watson Res. Center, RJ 3481, 1982.

J.W. Wallis and E.H. Shortliffe, Explanatory Power for Medical Expert 
Systems: Studies in the Representation of Causal Relationships for 
Clinical Consultation.  Stanford U. Comp. Sci. Dept., STAN-CS-82-923, 
1982, 37p.

S. Weiss, C. Kulikowski, C. Apte, and M. Uschold, Building Expert 
Systems for Controlling Complex Programs.  Rutgers U. Comp. Sci. Res.
Lab., LCSR-TR-40, 1982.

Y. Yuchuan and C.A. Kulikowski, Multiple Strategies of Reasoning for 
Expert Systems.  Rutgers U. Comp. Sci. Res. Lab., CBM-TR-131, 1982.

------------------------------

End of AIList Digest
********************

∂22-May-83  0145	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #6  
Received: from SU-SCORE by SU-AI with TCP/SMTP; 22 May 83  01:45:10 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sun 22 May 83 01:34:04-PDT
Date: Saturday, May 21, 1983 11:11PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #6
To: AIList@SRI-AI


AIList Digest            Sunday, 22 May 1983        Volume 1 : Issue 6

Today's Topics:
  Lectureships at Edinburgh University
  Distributed Problem-Solving: An Annotated Bibliography
  Loglan Cross Reference
  Re: Esperanto and LOGLAN
  Latest AI Journal Issue
  IBM EPISTLE System
  Software Copyright Info
----------------------------------------------------------------------

Date: Thursday, 12-May-83  10:31:00-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Lectureships at Edinburgh University

--------



                        UNIVERSITY OF EDINBURGH
             INFORMATION TECHNOLOGY - VLSI design and IKBS.


           1 Lecturer in Artificial Intelligence (ref IT2/1)
           1 Lecturer in Computer Science (ref IT2/2)
           1 Lecturer in Electrical Engineering (ref IT2/3)


These new lectureships are being funded to expand the M.Sc. teaching 
carried out by the 3 departments in collaboration.  The posts are 
available from 1 October 83, but the starting dates could be adjusted
to attract the right candidates. These are tenure track posts.


The teaching and research interests sought are:  Artificial
Intelligence: Intelligent Knowledge-Based Systems.  Computer Science:
Probably VLSI design, but need not be so.  Electrical Engineering:
VLSI design.


Salary scales (under review): 6375-13505 pounds p.a. according to age,
qualifications and experience.

For further details write to the Secretary to the University, Old 
College, South Bridge, Edinburgh EH8 9YL, Scotland, quoting one or
more reference numbers as required (IT2/1-3 as above).

Applications (3 copies) including CV and names and addresses of 3 
referees should be sent to the same address. If you have applied in 
response to the previous Computer Science advert, ref.  1055, then you
will be considered for posts IT2/2 and IT2/3 without further 
application.

------------------------------

Date: Tue 17 May 83 23:14:55-PDT
From: Vineet Singh <vsingh@SUMEX-AIM.ARPA>
Subject: Distributed Problem-Solving: An Annotated Bibliography


This is to request contributions to an annotated bibliography of 
papers in *Distributed Problem-Solving* that I am currently compiling.
My plan is to make the bibliography available to anybody that is 
interested in it at any stage in its compilation.  Papers will be from
many diverse areas: Artificial Intelligence, Computer Systems 
(especially Distributed Systems and Multiprocessors), Analysis of 
Algorithms, Economics, Organizational Theory, etc.

Some miscellaneous comments.  My definition of distributed 
problem-solving is a very general one, namely "the process of many 
entities engaged in solving a problem", so feel free to send a 
contribution if you are not sure that a paper is suitable for this 
bibliography.  I also encourage you to make short annotations; more 
than 5 sentences is long.  All annotations in the bibliography will 
carry a reference to the author.  If your bibliography entries are in 
Scribe format that's great because the entire bibliography will be in 
Scribe.

Vineet Singh (VSINGH@SUMEX-AIM.ARPA)

------------------------------

Date: 18 May 83 17:46:05-PDT (Wed)
From: harpo!seismo!rlgvax!jack @ Ucb-Vax
Subject: Loglan Cross Reference

Readers interested in the Loglan submissions should also see net.nlang.

------------------------------

Date: 16 May 1983 1817-EDT (Monday)
From: Robert.Frederking@CMU-CS-A (C410RF60)
Subject: Re: Esperanto and LOGLAN

        I'm curious about something mentioned about these languages:  
has anyone made any claims regarding the Sapir-Whorf hypothesis and
the fluent users of these languages?

        Bob

------------------------------

Date: 11 May 1983 2151-EDT
From: NUDEL.CL@RUTGERS (NUDEL.CL)
Subject: Latest AI Journal Issue

[I just pulled this and the following messages from various local
BBoards that Mabry Tyson makes available at SRI-AI.  -- KIL]

[...]
I just received a copy of the March issue of the AI journal from North
Holland and I see that the talk Haralick gave here Monday appears in
that issue of AI as well.  You may like to look at the March AI in
general - it is a special issue devoted to Search and Heuristics (in
memory of John Gaschnig), and covers recent AI research of a more
formal nature than the usual AI variety. It looks like it will become
something of a classic, with papers by Pearl, Simon, Karp, Lenat, 
Purdom (who also spoke here a while ago), yours-truly, Kanal, Nau and
Haralick.

Bernard

------------------------------

Date: 9 May 83 22:57:31 EDT
From: John Stuckey @CMUC
Subject: Presentation of IBM EPISTLE system

Dr. Lance A. Miller, director of the Language and Knowledge Systems 
Laboratory of IBM's Thomas J. Watson Research Center, Yorktown 
Heights, will be on campus Tuesday, 10 May.  He will give a 
presentation of the lab's EPISTLE system for language analysis from 2 
to 3 pm in Gregg Hall, PH 100.  The presentation is entitled "On Text 
Composition and Quality: The IBM EPISTLE system's alternatives to
NEWSPEAK."

Abstract:
  The immediate goals of the EPISTLE system are to provide useful 
text-critiquing functions for assuring the "quality" of written 
English text.  Today the system plunges through the densest prose and 
provides an "automatic unique parse" description of the surface 
syntactic structure of each sentence.  This description provides the 
basis for the present capability to detect almost all errors of 
grammar and, shortly, to raise its editorial eyebrow at a large number
of stylistic questionables (e.g., a la @i<Chicago Manual of Style>).
  This present Orwellian capability to render binary evaluative 
decisions on arbitrary text does not, however, reflect the ultimate 
design goals of the system.  These, the present state, and the 
internal workings of the system will be discussed.

------------------------------

Date: 16 May 1983 17:28:42-EDT
From: Michael.Young at CMU-CS-SPICE
Subject: software copyright info

The January/February 1983 issue of IEEE Computer Graphics and
Applications has an interesting article on software copyrighting and
patents, with loads of references to other cases.  It is a
well-documented case history and summary of the current situation for
anyone concerned with legal issues.

------------------------------

End of AIList Digest
********************

∂22-May-83  1319	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #7  
Received: from SU-SCORE by SU-AI with TCP/SMTP; 22 May 83  13:17:50 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sun 22 May 83 13:21:00-PDT
Date: Sunday, May 22, 1983 10:39AM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #7
To: AIList@SRI-AI


AIList Digest            Sunday, 22 May 1983        Volume 1 : Issue 7

Today's Topics:
  LISP for VAX VMS
  AI Job
  Phil-Sci Mailing List (2)
  Computer Resident Intelligent Entity (CRIE)  [Long Article]
----------------------------------------------------------------------

Date: 19 May 1983 09:19 cdt
From: Silverman.CST at HI-MULTICS
Subject: lisp for vax vms

We are trying to find out what implementations of lisp exist that we
can run on our vax under vms.  Any information about existing systems
and how to get them would be appreciated.  Reply to Silverman at
HI-Multics.

------------------------------

Date: Thu 19 May 83 10:41:33-PDT
From: Gordon Novak <NOVAK@SU-SCORE.ARPA>
Subject: AI Job

Two individuals with strong CS backgrounds and specific interest in
A.I. are sought for development of a modern air traffic control system
for the whole U.S.  Position located on the East Coast in the
mid-Atlantic states.  Contact Jay R. Kronfeld, Kronfeld & Young Inc.,
412 Main St., Ridgefield, Conn. 06877.  (203) 438-0478

------------------------------

Date: 9 May 1983 1047-EDT
From: Don <WATROUS@RUTGERS>
Subject: Prolog, Phil-Sci mailing lists

[...]

Also of interest to local readers might be the local Phil-Sci BBoard, 
which receives the Philosophy-of-Science mailing list.
Here is its description:

PHILOSOPHY-OF-SCIENCE@MIT-MC
         (or PHIL-SCI@MIT-MC)

   An immediate redistribution list discussing philosophy of science
   with emphasis on its relevance for Artificial Intelligence.

   The list is archived@MIT-OZ in the twenex mail file:
   OZ:SRC:<COMMON>PHILOSOPHY-OF-SCIENCE-ARCHIVES.TXT.1

   All requests to be added to or deleted from this list, problems,
   questions, etc., should be sent to
   PHILOSOPHY-OF-SCIENCE-REQUEST@MIT-MC (or PHIL-SCI-REQUEST@MIT-MC).

   Coordinator: John Mallery <JCMa@MIT-MC>


------------------------------

Date: 10 May 1983  01:36 EDT (Tue)
From: _Bob <Carter@RUTGERS>
Subject: Phil-Sci Readers, Please Note


Hi,

Before FTP'ing the archive mentioned by Don's Phil-Sci announcement,

         [OZ]SRC:<COMMON>PHILOSOPHY-OF-SCIENCE-ARCHIVE.TXT

please note that this OZ file is written in ZMAIL format, and is not 
readable with either MM or BBOARD.EXE.  ZMAIL is a LISPMachine mail 
reader from MIT.  You can TYPE or edit ZMAIL files, but they are 
sometimes pretty hard to follow that way.

If you are interested in looking at back issues of this list in a more
civilized fashion, I have been following it from the beginning, and 
have a home-built archive archived (howzat again?) on GREEN, as 
I-PHIL-SCI.BABYL through VI-PHIL-SCI.BABYL.  These files have been 
reformatted for convenient reading with BABYL, an EMACS-based 
mail-reader available at Rutgers.  Also archived on GREEN is a help 
file named

              USING-BABYL-TO-READ-PHIL-SCI.HLP.

Please do not attempt to RETRIEVE this stuff; drop me a note instead.
These files total several hundred pages and would swamp my GREEN 
directory if restored to disk all at once.

_Bob

------------------------------

Date: Tue, 17 May 83 19:12:45 EDT
From: Mark Weiser <weiser@NRL-CSS>
Subject: Computer Resident Intelligent Entity (CRIE)  [Long Article]

1.  The Operating System World

     An interesting test-bed for Artificial Intelligence (AI) methods 
is the world of computer systems.  Previous work has focused on 
limited particular subdomains, such as digital design [Sussman 77], 
computer configuration [McDermott & Steele 81], and programming 
knowledge [Waters 82].  Even these restricted domains have proven 
themselves very rich areas for AI techniques.  However, no one has 
(yet) gone far enough in applying Artificial Intelligence techniques 
to computer systems.  The far out question I'm thinking of is: what 
sort of entity would live in the ecological niche supplied by the 
computer system environment?

     Organisms evolved in the biological world have been shaped 
primarily by evolutionary forces.  They cannot be holistically 
studied without considering, for instance, their energy intake and 
expenditure and their necessity for reproduction [Kormondy 69].  These
particular constraints are biological universals, but are not 
necessarily paradigmatic for non-biological intelligent organisms.  
Consider human beings, necessarily the prime subjects of those 
studying intelligent biological organisms.  We* are specifically 
attuned to a particular environmental niche by virtue of our sensory 
systems, our cognitive processing capabilities, and our motor systems.
Dreyfus [Dreyfus 72] argues from this that machines cannot be 
intelligent.  Our discussion begins from a view more akin to 
Weizenbaum's [Weizenbaum 76]: a machine intelligence is an alien 
intelligence.  What sort of sensory system is appropriate to this 
particular alien intelligence?

2.  Traditional perceptual interfaces to the computer world

     The usual way of observing a computer system is to take 
snapshots.  Such a snapshot might be a list of the active jobs on the 
system, or the names and sizes of the files, or the contents of a 
file.  If more information than a snapshot is needed, then many 
snapshots are packed together to create a "history" of system 
behavior.

     Unfortunately a history of snapshots is not a history of the 
system.  This is well known in performance modeling of computer 
systems, where a snapshot of a system every 15 minutes is useless.  
Instead an average over the 15 minute interval is the proper level of 
information gathering.  The problem with snapshots is that their time 
domain is fixed externally, without regard to the world being 
monitored.
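The snapshot-versus-average distinction can be illustrated with a
minimal Python sketch (not part of the original article; the bursty
load trace is invented for illustration):

```python
# Contrast a point-in-time snapshot with an interval average.
# A snapshot taken every 15 minutes can miss everything that happened
# in between; an average over the interval characterizes it.

def snapshot(samples, t):
    """Instantaneous reading at time t (may be unrepresentative)."""
    return samples[t]

def interval_average(samples, start, end):
    """Average over [start, end) -- a proper summary of the interval."""
    window = samples[start:end]
    return sum(window) / len(window)

# A bursty CPU-load trace: idle except for one busy spike.
load = [0.0] * 15
load[7] = 1.0

print(snapshot(load, 0))              # 0.0  -- the spike is invisible
print(interval_average(load, 0, 15))  # ~0.067 -- the spike is captured
```

The snapshot's sampling times are fixed externally, so it can land
anywhere relative to the events of interest; the average is defined
over the interval itself.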

     It is sometimes possible to recreate the behavior of system 
objects by examining a stream of snapshots of the object's states.  
But this is the wrong approach to the problem.  Rather ask: what sort 
of perceptual system would best notice the important objects 
(invariants) in a computer system world [Gibson 66]?  A snapshot 
contains irrelevant information, and is gathered at irrelevant times.

3.  New perceptual interfaces

     Imagine your favorite computer system.  It consists of objects 
changing in time: files, programs, processes, descriptions, data 
flowing hither and yon--a very active world.  A retinal level 
description of the biological world would display a similar confusion 
of unintegrated sensations.  But our retina wins because it is part of
a perceptual system which quickly transforms the input flux to 
invariant forms.

     Let's ignore the back end (invariant deduction end) of a computer
perceptual system for a moment, and consider just the "retinal" end.  
What kind of raw data is available about important system activities?
On the one hand are the contents of files, data structures, program 
descriptions, etc.  The understanding of these items is relatively 
well studied--as a first approximation it is what programs do.  The 
hard problem is perceiving the information flux.  Values in memory and
files are constantly changing, and often it is the changes themselves 
which are interesting, more than what the value was changed from or 
what it was changed to.  For instance, noticing someone poking around 
in 
my files is a "who is looking" question rather than a data value 
question.  Noticing important changes in the system requires an 
event-based perceptual system.

     Activities occur in widely distributed places in a computer 
system.  User programs, file systems, system data structures, may all 
be relevant to the intelligent computer resident entity.  The human 
visual system has evolved to make good use of the transparency of our 
atmosphere to electromagnetic radiation of a certain wavelength to 
allow us to perceive activities in a wide range around us. A great 
deal of our intelligence is oriented towards the three dimensional 
space which we can survey, because it is here that we have effortless 
access to information about the objects which can immediately affect 
us [Kaplan 78].

     A computer entity must also have effortless access to information
about objects in its area of prime concern.  Its perceptual apparatus 
should be attuned to changes in those entities so interesting events 
are immediately apparent to it.  With our current technology** one 
solution is to distribute the perceptual apparatus of the entity onto 
the objects of concern.  This is radically different from any solution
chosen by nature, but the computer system world is radically different
from the biological world.  It amounts to daemon-based perception.

     The perceptual mechanism of a computer resident intelligent 
entity (CRIE) would be similar to production rules [Forgy 81] and 
daemons [Rieger 78].  A CRIE retina would have two distinctive 
features: (1) it is made up of daemons, which are (2) attached to the 
objects being observed.

     A CRIE perceptual system is quiescent until some event occurs to 
which it is attuned.  When that happens, a CRIE reacts by invoking 
various reasoning and acquisition daemons associated with that event.
These reasoning and acquisition daemons are modular pieces of 
information which are the low level meaning of events within a CRIE.  
The daemons not only watch for events occurring on the system, but 
also can observe larger contexts (such as themselves).
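The daemon-attached-to-object scheme described above might be sketched
as follows (a hypothetical Python illustration, not from the original
article; the class and daemon names are invented):

```python
# A minimal sketch of daemon-based perception: daemons are attached
# directly to the objects being observed, and the perceptual system
# stays quiescent until a watched event (here, an attribute change)
# occurs.

class WatchedObject:
    def __init__(self, name):
        self.__dict__["name"] = name       # bypass __setattr__ here
        self.__dict__["_daemons"] = []

    def attach_daemon(self, daemon):
        """Attach a reasoning/acquisition daemon to this object."""
        self._daemons.append(daemon)

    def __setattr__(self, attr, value):
        old = self.__dict__.get(attr)
        self.__dict__[attr] = value
        for daemon in self._daemons:       # invoke daemons on the event
            daemon(self, attr, old, value)

events = []

def curiosity_daemon(obj, attr, old, new):
    """Record who changed what -- a 'who is looking' question."""
    events.append((obj.name, attr, old, new))

f = WatchedObject("password-file")
f.attach_daemon(curiosity_daemon)
f.last_reader = "weiser"    # the daemon fires only on this event
```

Nothing polls the object; the daemon runs exactly when the event it is
attuned to occurs, which is the event-based property the article asks
for.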

     To conclude: Artificial Intelligence research has, as one goal, 
understanding how to embed intelligence in a machine.  The criticisms 
of AI from Dreyfus, Weizenbaum, and others can be used constructively 
to design an intelligence appropriate to a machine.  This approach to 
intelligent system design leads to new kinds of design constraints for
computer perceptual systems, and gives new meaning to the term 
"computer vision".

FOOTNOTES

   *With apologies to those readers who are not human beings.
  **Implementation issues are important for the design of any intelli-
gent entity.  Why are our eyes in our head?

REFERENCES

[Dreyfus 72]
     Dreyfus, Hubert, What Computers Can't Do, Harper and Row, 1972.

[Forgy 81]
     Forgy, C. L., OPS5 User's Manual, Carnegie-Mellon University
     CMU-CS-78-116, 1981.

[Gibson 66]
     Gibson, James J., The Senses Considered as Perceptual Systems,
     Houghton Mifflin Company, 1966.

[Kaplan 78]
     Kaplan, R., The green experience, in Humanscape: environments for
     people, ed. S. Kaplan and R. Kaplan, Duxbury Press, North
     Scituate, Mass., 1978.

[Kormondy 69]
     Kormondy, Edward J., Concepts of Ecology, Prentice-Hall, 1969.

[McDermott & Steele 81]
     McDermott, J. and Steele, B., Extending a Knowledge-Based System
     to Deal with Ad Hoc Constraints, Proc. IJCAI-81, Vancouver, BC,
     1981.

[Rieger 78]
     Rieger, C., Spontaneous Computation and Its Role in AI Modelling,
     in Pattern-Directed Inference Systems, ed. Waterman & Hayes-Roth,
     Academic Press, New York, 1978.

[Sussman 77]
     Sussman, G., Electrical Design: A Problem for Artificial
     Intelligence Research, Proc. IJCAI5, Cambridge, MA, 1977.

[Waters 82]
     Waters, R. C., The Programmer's Apprentice: Knowledge Based
     Program Editing, IEEE Trans. on Software Eng. SE-8, 1, January
     1982.

[Weizenbaum 76]
     Weizenbaum, Joseph, Computer Power and Human Reason, W.H. Freeman
     and Company, 1976.


[Editor's comment:

Mark doesn't seem to be asking about the natural course of evolution
in a digital environment, although that is also an interesting
question.  It is not clear to me whether he is proposing a life form
with the usual survival goals, or a monitoring system built by design
and serving some useful purpose.  Since it is difficult to discuss
such a thing without knowing its purpose, I suggest that anyone
responding state his own assumptions or teleology.

I think the new LOOPS language/environment at Xerox offers much of the
"instrumentation capability" that Mark's CRIE needs.  The software
probes can be attached to any variable a posteriori, in the manner of
a dynamic debugger.  This opens up a world of data-based (or dataflow)
techniques integrated with rule-based and other AI techniques.
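Such a posteriori probes can be loosely approximated in modern Python
(this is an analogy to the LOOPS capability described above, not LOOPS
itself; the class and helper names are invented):

```python
# Approximating a LOOPS-style probe: attach a watcher to an attribute
# of an existing class after the fact, without editing the class's own
# code, in the manner of a dynamic debugger.

class Counter:
    def __init__(self):
        self.count = 0

log = []

def add_probe(cls, attr, on_write):
    """Replace cls.attr with a property that reports every write."""
    hidden = "_" + attr

    def getter(self):
        return getattr(self, hidden, None)

    def setter(self, value):
        on_write(attr, value)          # fire the probe on each write
        setattr(self, hidden, value)

    setattr(cls, attr, property(getter, setter))

add_probe(Counter, "count", lambda attr, v: log.append((attr, v)))

c = Counter()    # __init__'s write to .count now fires the probe
c.count += 1
```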

                                        -- KIL]

------------------------------

End of AIList Digest
********************

∂22-May-83  1248	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #8  
Received: from SU-SCORE by SU-AI with TCP/SMTP; 22 May 83  12:47:16 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sun 22 May 83 12:50:15-PDT
Date: Sunday, May 22, 1983 11:16AM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #8
To: AIList@SRI-AI


AIList Digest            Sunday, 22 May 1983        Volume 1 : Issue 8

Today's Topics:
  1984 IEEE Logic Programming Symposium
  More Expert Systems Reports
  Requests for Addresses (2)
  Sources for Reports  [Long List]
----------------------------------------------------------------------

Date: Mon 16 May 83 11:08:44-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: 1984 IEEE Logic Programming Symposium

CALL FOR PAPERS

                 The 1984 International Symposium on
                          LOGIC PROGRAMMING

            Atlantic City, New Jersey, February 6-9, 1984

                Sponsored by the IEEE Computer Society
          and its Technical Committee on Computer Languages

The symposium will consider fundamental principles and important 
innovations in the design, definition, and implementation of logic 
programming systems and applications. Of special interest are papers 
related to parallel processing. Other topics of interest include (but 
are not limited to): distributed control schemes, FGCS, novel 
implementation techniques, performance issues, expert systems, natural
language processing and systems programming.

Please send ten copies of an 8- to 20-page, double spaced, typed 
manuscript, including a 200-250 word abstract and figures to:

                Doug DeGroot
                Program Chairman
                IBM Thomas J. Watson Research Center
                P.O. Box 218
                Yorktown Heights, New York 10598

                         Technical Committee

                Jacques Cohen (Brandeis)
                Doug DeGroot (IBM Yorktown)
                Don Dwiggins (Logicon)
                Bob Keller (Utah)
                Jan Kormorowski (Harvard)
                Michael McCord (IBM Yorktown)
                Fernando Pereira (SRI International)
                Alan Robinson (Syracuse)
                Joe Urban (Univ. Southwestern Louisiana)
                Adrian Walker (IBM San Jose)
                David Warren (SRI International)
                Jim Weiner (Univ. New Hampshire)
                Walter Wilson (IBM DSD Poughkeepsie)

Summaries should explain what is new or interesting about the work and
what has been accomplished. It is important to include specific 
findings or results, and specific comparisons with relevant previous 
work. The committee will consider appropriateness, clarity, 
originality, significance, and overall quality of each manuscript.  
Manuscripts whose length exceeds 20 double spaced, typed pages may 
receive less careful scrutiny than the work merits.

If submissions warrant, the committee will compose a four-day program.
---------------------------------------------------------------------

September 1, 1983 is the deadline for the submission of manuscripts.  
Authors will be notified of acceptance or rejection by October 30, 
1983. The accepted papers must be typed on special forms and received 
by the program chairman at the above address by December 15, 1983.  
Authors of accepted papers will be expected to sign a copyright 
release form.

Proceedings will be distributed at the symposium and will be 
subsequently available for purchase from IEEE Computer Society.

        Conference Chairman                Technical Chairman
        Joe Urban                          Doug DeGroot
        Univ. of Southwest Louisiana       IBM T. J. Watson Res Ctr
        CS Dept.                           P. O. Box 218
        P.O. Box 44330                     Yorktown Hts., NY 10598
        Lafayette, LA 70504                (914)945-3497
        (318)231-6304

                Publicity Chairman
                David Warren
                SRI International
                333 Ravenswood Avenue
                Menlo Park, CA 94025
                (415)859-2128

------------------------------

Date: 19 May 83 11:13:56 EDT  (Thu)
From: Dana S. Nau <dsn.umcp-cs@UDel-Relay>
Subject: Re:  Expert Systems Reports


Here are some additions:

Reggia, J. A., Nau, D. S., and Wang, P., Diagnostic Expert Systems
     Based on a Set Covering Model, INTERNAT. JOUR. MAN-MACHINE STU-
     DIES, 1983.  To appear.

Nau, D. S., Expert Computer Systems, COMPUTER 16, 2, pp.  63-85, Feb.
     1983.

Nau, D. S., Reggia, J. A., and Wang, P., Knowledge-Based Problem Solv-
     ing Without Production Rules, IEEE 1983 TRENDS AND APPLICATIONS
     CONFERENCE, May 1983.  To appear.

Reggia, J. A., Wang, P., and Nau, D. S., Minimal Set Covers as a Model
     for Diagnostic Problem Solving, PROC. FIRST IEEE COMPUTER SOCIETY
     INTERNAT. CONF. ON MEDICAL COMPUTER SCI./COMPUTATIONAL MEDICINE,
     Sept. 1982.

------------------------------

Date: Wed 18 May 83 13:55:16-PDT
From: Samuel Holtzman <HOLTZMAN@SUMEX-AIM.ARPA>
Subject: Expert system references.

Ken,
        In the latest AILIST you posted a set of references which were
of interest to me.  Is there any simple way (other than writing
directly to the authors) to get copies of these papers?  Some of them
are published very locally, and might be difficult to obtain.  In
general, a nice feature to add on to each reference would be a net
address to send for copies.

Thanks, Sam Holtzman

------------------------------

Date: 18 May 1983 1454-PDT (Wednesday)
From: ricks%UCBCAD@Berkeley
Subject: AI Memos

I would like to get some memos from the MIT AI Lab and the Stanford
Heuristic Programming Project.  Could somebody send me information on
how to order documents from them?

            Thanks,
            Rick L Spickelmier

            ricks@berkeley

------------------------------

Date: Sat 21 May 83 22:30:00-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: Sources for Reports  [Long List]

Sam is in luck: the reports I listed are all available at the Stanford
Math/CS library.  I have sorted out other AI-related topics from the
Stanford recent acquisitions list and plan to make them available in
some form.  (Direct mailing to the AIList membership seems
inappropriate unless the bibliography is short or there is a need for
a wide spectrum of readers to scan the material for errors and
omissions.  I would be interested in metacomments or personal 
communication on this matter.)

For those who want to order reports, it seems economical to list 
source addresses once rather than every time a new report becomes 
available.  I have culled the following from the Abstracts section of 
the SIGART newsletters for the last few years.  (Only a handful of 
organizations have regularly announced new reports in this forum.)  I
will publish corrections and additions as they are sent in.

                                        -- Ken Laws

Bolt Beranek and Newman, Inc.
50 Moulton Street
Cambridge, MA  02238

Brown University
Department of Computer Science
Box 1910
Providence, RI  02912

Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA  15213

Mathematics Department
Carnegie-Mellon University
Pittsburgh, PA  15213

CMU Robotics Institute
Pittsburgh, PA  15213
Robin Wallace@CMU-10A

Dept. of Computer Science
Duke University
Durham, NC  27706

Fairchild Camera and Instrument Corp.
Laboratory for Artificial Intelligence Research
4001 Miranda Ave.  MS 30-888
Palo Alto, CA  94304

General Electric
Research and Development Center
P.O. Box 43
Schenectady, NY  12301

Computer Science Department
General Motors Research Laboratories
Warren, MI  48090

Hewlett Packard Laboratories
1501 Page Mill Road
Palo Alto, CA  94303

Behavioral Sciences and Linguistics Group
Computer Science Department
IBM Thomas J. Watson Research Center
Yorktown Heights, NY  10598

Document Distribution
USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA  90291

Instituto de Investigaciones en Matematicas
    Aplicadas y en Sistemas
Apartado Postal 20-726
Mexico 20, D.F.

ISSCO Working Papers
Institut pour les Etudes Semantiques et Cognitives
17 rue de Candolle
CH1205 Geneve
Switzerland

Information Systems Research Section
Jet Propulsion Laboratory
Pasadena, CA  91103

Department of Information Sciences
Kyoto University
Kyoto, 606, JAPAN

Centro de Informatica
Laboratorio Nacional de Engenharia Civil
101, Av. do Brazil
1799 Lisboa Codex
Portugal

Computer Vision and Graphics Laboratory
Dept. of Electrical Engineering
McGill University
Montreal, Quebec, Canada

Massachusetts Institute of Technology
Laboratory for Computer Science
Cambridge, MA  02139

MIT AI Lab.
545 Technology Square
Cambridge, MA  02139

Laboratory of Statistical and Mathematical
    Methodology
Division of Computer Research and Technology
National Institutes of Health
Bethesda, MD  20205

National Technical Information Service
5285 Port Royal Road
Springfield, Virginia  22161

Computing Systems Dept., IIMAS
National University of Mexico
Admon 20 Deleg Alv Obregon
Apdo. 20-76
01000 Mexico DF
Mexico

Naval Research Laboratory
Washington, D.C.  20375

AI Group
Dept. of Computer and Information Science
The Ohio State University
Columbus, Ohio  43210

Dept. of Computer Science
Oregon State University
Corvallis, OR  97331

School of Electrical and Civil Engineering
Purdue University
West Lafayette, IN  47907

Artificial Intelligence Center
EJ250
SRI International
333 Ravenswood Avenue
Menlo Park, CA  94025

Heuristic Programming Project
Department of Computer Science
Stanford University
Stanford, CA  94305

Department of Computer Science
State Univ. of New York at Buffalo
4226 Ridge Lea Road
Amherst, NY  14226

Department of Computer Science
State Univ. of New York at Stony Brook
Stony Brook, NY  11794

Systems Performance Dept.
TRW
One Space Park, 02/1733
Redondo Beach, CA  90278

Department of Computer Science
The Univ. of British Columbia
Vancouver, British Columbia  V6T 1W5

Department of Electrical Engineering and Computer Science
University of California
275 Cory Hall
Berkeley, CA  94720

Dept. of Information and Computer Science
University of California, Irvine
Irvine, CA  92717

Cognitive Systems Laboratory
School of Engineering and Applied Science
University of California
Los Angeles, CA  90024

Dept. of Artificial Intelligence
University of Edinburgh
Forrest Hill
Edinburgh  EH1 2QL
Scotland

Cognitive Studies Centre
Department of Computer Science
University of Essex
Wivenhoe Park
Colchester  CO4 3SQ

Research Unit for Information Science and
    Artificial Intelligence
University of Hamburg
Mittelweg 179
D-2000 Hamburg 13
Federal Republic of Germany

Fachbereich Informatik
Universitaet Hamburg
Schlueterstr. 70
D-2000 Hamburg 13
West Germany

Universitaet Hamburg
Germanisches Seminar
Von-Melle-Park 6
D-2000 Hamburg 13
Federal Republic of Germany

Publications Editor
Department of Computing
Imperial College of Science and Technology
University of London
180 Queen's Gate
London  SW7 2BZ

Publications
Advanced Automation Research Group
Coordinated Science Laboratory
University of Illinois
Urbana, IL  61801

Artificial Intelligence Group
Department of Computer Science
University of Maryland
College Park, MD  20742

Department of Neurology
University of Maryland Hospital
Baltimore, MD  21201

University Microfilms
300 North Zeeb Road
Ann Arbor, MI  48106

Department of Computer and Information Science
The Moore School  / D2
University of Pennsylvania
Philadelphia, PA  19104

Computer Science Department
University of Rochester
Rochester, NY  14627

Dept. of Computer Science
University of Toronto
Toronto, Ontario, Canada

Dept. of Computer Science
University of Utah
3160 Merrill
Engineering Building
Salt Lake City, Utah  84112

Department of Electrical Engineering
University of Washington
Seattle, WA  98105

Computer Science Dept.
University of Wisconsin
Madison, WI  53706

Department of Computer Science
Wayne State University
Detroit, MI  48202

XEROX Palo Alto Research Center
Palo Alto, CA

Yale Artificial Intelligence Project
Department of Computer Science
Box 2158 Yale Station
10 Hillhouse Ave.
New Haven, Conn.  06520

Department of Computer Science
York University
Downsview, Ontario  M3J 1P3

------------------------------

End of AIList Digest
********************

∂29-May-83  0046	LAWS%SRI-AI.ARPA@SU-SCORE.ARPA 	AIList Digest   V1 #9  
Received: from SU-SCORE by SU-AI with TCP/SMTP; 29 May 83  00:42:10 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sun 29 May 83 00:07:15-PDT
Date: Saturday, May 28, 1983 10:58PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #9
To: AIList@SRI-AI


AIList Digest            Sunday, 29 May 1983        Volume 1 : Issue 9

Today's Topics:
  More information on Esperanto
  Address Correction & Addition
  High Technology Articles
  Request for Expert System Info
  Reading machines (2)
  Administrative Policy
----------------------------------------------------------------------

Date: 23 May 83 21:24:39 EDT  (Mon)
From: Fred Blonder <fred.umcp-cs@UDel-Relay>
Subject: More information on Esperanto

[...]

The best place to contact is:

        Esperanto League for
        North America, Inc.
        P.O. Box 1129
        El Cerrito, CA 94530

They promote Esperanto wherever they can and publish a newsletter 
every few months. They also operate the ``Esperanto Book Service'' (at
the same address) which can supply Esperanto textbooks, Esperanto 
translations of literary works, original Esperanto literary works, 
tapes, records etc. Send them a dollar when writing to them if you 
want their complete catalog.

This is a partial listing of their books which may be of interest (and
is probably out of date, but it's all I have):

        Teach Yourself Esperanto, 205p $3.95 (basic text)
        Esperanto Dictionary, 419p $3.50
        Pasoj al Plena Posedo, 240p $5.50 (advanced text)
        La ingenia hidalgo Don Quijote de la Mancha
                        820p $35.00 (just what you think it is)
        Asteriks la Gaulo 48p $7.00 (comic book)

There's also some strange Esperanto/Computer-Science organization 
based in Budapest, which mails their newsletter from Sofia Bulgaria.  
I'm on their mailing list, but haven't heard from them in over a year.
Whatever it was, it probably died out.

I've also seen some pornographic books written in Esperanto, but don't
know where they can be obtained. Speaking of which: all of the 
fivortoj (fee-VOR-toy: dirty words) in Esperanto were originated by a
doctor who was a friend of the originator of the language, and who had
a sincere interest in the language, so you know they're medically and
grammatically correct. What other language do you know which can boast
this?

                                        Bonan tagon,
                                        Fred
                                        <fred.umcp-cs@Udel-Relay>

------------------------------

Date: 23 May 1983 1021-CDT
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20>
Subject: Address Correction & Addition

Note that ISSCO has moved; here's the new address:

        ISSCO
        54 rte. des Acacias
        1227 Geneve
        Switzerland

(No telephone numbers changed.)  While I'm at it, I'll plug my org:

The Linguistics Research Center of the University of Texas (host of 
our friendly MCC) is engaged in R&D for Machine Translation [of 
natural languages].  A German-English translation system is running, 
has translated close to 700 pages of material of various sorts (mostly
op./maint. manuals, but also things like software/hardware
descriptions and sales brochures), and is near commercial viability.
An English-German system is underway, with another major effort to
develop a third language about to begin.  In addition, a visiting
Chinese scholar is expected to begin experimenting with
English-Chinese translation later this year.

Address for technical reports, etc:

        Linguistics Research Center
        P.O. Box 7247
        University Station
        Austin, Texas 78712

------------------------------

Date: Sat 28 May 83 22:25:40-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: High Technology Articles

The June edition of High Technology contains several interesting 
articles.  There are minor pieces on industrial robots, laser printers
for electronic publishing, and 16-bit micros; also a feature on video 
games (again).

There is a lengthy extract from Ed Feigenbaum and Pamela McCorduck's 
new book on the Japanese fifth generation effort.  It seems to be a
balanced presentation.

There is also a good review of dataflow and reduction architectures, 
with some mention of other alternatives to von Neumann computers.  The
Real World is beginning to take notice.

                                        -- Ken Laws

------------------------------

Date: 24 May 1983 0827-PDT
From: RTAYLOR at USC-ECL
Subject: Request for Expert System Info


Ken,
    I get the AIList via the BB at RADC-TOPS20.  (I have access to 
RADC-Multics and TOPS-20 both, as well as the USC-ECL machine.  I
usually use the USC-ECL machine for msg composing.)  I am the newest
member of the AI Group located at RADC (Rome, NY) working with Nort
Fowler.  I am responsible for expert systems and expert systems tools.
Like Sam Holtzman, I am interested in expert systems literature and AI
in general.  I am trying to "build" a reference library for our use
here at RADC.
    My in house project is "to evaluate existing knowledge base tools
which have been used to build expert systems.  This evaluation will
determine the strengths and weaknesses of these various tools; such as
their ease of use, their knowledge base management techniques, and
their knowledge base maintenance techniques."
    Those systems/tools I am currently pursuing are:  age, ap3, emycin,
expert, frl, hearsay, kas, kee, ops5, prospector, rll, ross, and
units.  We have access to interlisp, and are in the process of
acquiring maclisp.  Among other things, I am supposed to acquire
these and any others I can find and that we can afford.  After
acquiring them, I am to "get up to speed" on each, then bring the
other members of the group up to speed on each.  Then we are to take a
series of problems ("graded levels of difficulty"), and solve each
problem using each tool/system.
    In a sense, for each tool, I'll have to come up with suggested 
instructions or some sort of tutorial--at least enough to get each
member started experimenting on their own.  Needless to say, I've
never worked with any of these tools before, and have limited
knowledge of what might be available (out there) to help me.
    In summary, I am looking for 1) literature and references for our 
library, 2) expert systems/tools for our collection and in house use
and evaluation, and 3) any existing tutorial-oriented help for the
above tools and any other (tools) which might be suggested we
investigate.
    Thanks for the help and for listening.  Please direct info and/or 
further questions to me:  rtaylor at ecl.
                                  Roz

------------------------------

Date: 25 May 83 5:38:25-PDT (Wed)
From: decvax!cca!linus!genrad!wjh12!n44a!ima!inmet!bhyde @ Ucb-Vax
Subject: Reading machines? - (nf)


  Ah, why is it that you can't seem to buy a machine to read printed 
text that actually works?
                                Ben Hyde
                                bhyde!inmet


[This seems to be an indirect request for information on the state of
the art in reading machines.  As a start, I suggest

  J. Schurman, Reading Machines, Proc. 6th Int. Conf. on
  Pattern Recognition, Munich, Oct. 1982, pp. 1031-1044.

                                -- KIL]

------------------------------

Date: 27 May 83 20:11:30-PDT (Fri)
From: hplabs!hao!seismo!presby!burdvax!hdj @ Ucb-Vax
Subject: Re: Reading machines -- an answer to the question

Doesn't Kurzweil (sp?), a Xerox Company, I think, make such a machine?
I heard about it a couple of years ago; it can supposedly recognize 
almost any font, is trainable, can read four or five lines of text at
once, and more.  I haven't heard much about the company or their
machine recently.  Anyone know more?

        Herb Jellinek, SDC Logic-Based Systems Group, burdvax!hdj

------------------------------

Date: 22 May 1983 1321-PDT
From: Keith Wescourt
Reply-to: Wescourt@USC-ISI
Subject: Administrative Policy

Ken,

You might want to consider whether job announcements, like the one
posted by Gordon Novak (originally only to SU-BBOARDS) and included in
this AIList issue, violate the ARPANET policies about commercial use.
I can imagine that job announcements from universities and non-profits
are acceptable, but that those from private, profit-making outfits and
their contracted headhunters are not.  Note that Gordon's original was
not transmitted via ARPANET, so he could not have violated any DCA
policies.

Note that I work for a private, profit-making R&D company and it would
be very much to our advantage to exploit our access to the ARPANET for
advertising job openings.

Keith

[Quite right; I apologize for picking up the item and will not report 
specific solicitations in the future.  Lab descriptions and other 
indirect information are still welcome. -- KIL]

------------------------------

End of AIList Digest
********************

∂03-Jun-83  1832	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #10
Received: from SU-SCORE by SU-AI with TCP/SMTP; 3 Jun 83  18:32:46 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Fri 3 Jun 83 18:36:03-PDT
Date: Friday, June 3, 1983 5:27PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #10
To: AIList@SRI-AI


AIList Digest            Saturday, 4 Jun 1983      Volume 1 : Issue 10

Today's Topics:
  VAX Interlisp Availability
  LIPS
  Kurzweil Reading Machine
  Chemical AI, Scientific Journals
  Current List of Hosts
----------------------------------------------------------------------

Date: 31 May 1983 1434-PDT
From: Raymond Bates <RBATES at ISIB>
Subject: VAX Interlisp Availability

In response to the Silverman [V1 #7] message:

Interlisp is available for both the VMS and UNIX operating systems for
the VAX family.  For more information send a note to Interlisp@ISIB 
with a post office address in it.

/Ray

------------------------------

Date: Thu 12 May 83 22:59:59-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: LIPS

[Reprinted from the Prolog Digest.]

The LIPS (logical inferences per sec.) measure for Prolog (and maybe 
other logic programming systems) is not as useless as it might appear 
at first sight.  Of course, resolving a goal against a clause head 
takes a different amount of work for different goals and clauses, but 
a similar observation could be made about the MIPS measure for 
conventional machines.  The speed of the concatenate loop

        conc([],L,L).
        conc([X|L1],L2,[X|L3]) :- conc(L1,L2,L3).

appears to be a remarkably good indicator of the speed of a Prolog 
implementation for large "pure" Prolog programs (i.e., Horn clauses plus
cut, but no evaluable predicates except maybe arithmetic).  For example,
compiled Prolog on a DEC 2060 runs at 43000 LIPS with this estimate, 
and (interpreted) C-Prolog on a VAX 11/780 runs at 1500 LIPS.  Prolog 
compilers for the VAX and similar machines are starting to be 
developed, and at least one is expected to reach 15000 LIPS on a VAX 
780 (it will be quite a while before these are incorporated into full 
Prolog systems). The first Prolog machine prototype from Japan (the 
Psi machine from ICOT) is expected to reach 40000 LIPS.
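
[An illustrative aside, not part of the original message: the counting
idea behind LIPS can be made concrete by transcribing the concatenate
loop into another language and dividing inferences by elapsed time.
The Python sketch below does this; the function names and parameters
are invented for illustration, and a real Prolog system would of
course run the clauses directly.]

```python
# Hypothetical sketch: estimating LIPS by timing a Python transcription
# of the Prolog concatenate loop.  Each call of conc corresponds to
# resolving a goal against one clause head, i.e. one logical inference.
import time

def conc(l1, l2):
    # conc([],L,L).
    if not l1:
        return l2
    # conc([X|L1],L2,[X|L3]) :- conc(L1,L2,L3).
    return [l1[0]] + conc(l1[1:], l2)

def lips_estimate(n=500, reps=200):
    """Concatenate an n-element list reps times; each run performs
    n + 1 inferences (n recursive steps plus the base case)."""
    xs = list(range(n))
    start = time.perf_counter()
    for _ in range(reps):
        conc(xs, [])
    elapsed = time.perf_counter() - start
    return reps * (n + 1) / elapsed
```

The point is only that the benchmark reduces to counting head
unifications per second, which is why it transfers across
implementations as well as it does.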

Extensive use of evaluable predicates may invalidate the measure to a 
large extent (but then, we aren't talking about *logic* programs 
anymore, and "logical inference" is no longer the main operation).

-- Fernando Pereira

------------------------------

Date: Tue, 31 May 83 10:25 PDT
From: GMEREDITH.ES@PARC-MAXC.ARPA
Subject: Kurzweil Reading Machine

The Kurzweil company, a subsidiary of Xerox, is producing a reading 
machine which is, to my knowledge, the most advanced in the industry.
Xerox had the unit on display at the NCC in Anaheim in May.

Xerox has recently donated a number of the Kurzweil units to various 
educational institutes to aid blind students, so some people on the
nets have probably had experience with them or can locate one nearby
to check out.

Guy

------------------------------

Date: 1 Jun 1983 1238-PDT
From: RTAYLOR at USC-ECL
Subject: Chemical AI, Scientific Journals


Ken (and everyone else!),
    Thanks for the response to my cry for help [concerning expert 
systems for evaluation at RADC].  From 9 Jun thru 20 Jun I will be 
enjoying "God's Country" (Oregon to the uninformed).  But, until my 
storage quota is exceeded, my mailbox will accept msgs--which I will 
diligently answer on my return.
    For those of you who don't know me personally, I was a chemist
before being "lured" away to the US Air Force and electronics.  I
still maintain my ACS membership (American Chemical Society).  C&E
News (the ACS weekly info publication) devoted a large part of their 9
May 83 issue to computers and mathematical tools and their influence
on Chemistry.  Their "Special Report" feature was entitled "A computer
program for organic synthesis".  I have not studied it, but have
skimmed it, thinking it would be worth reading.
    I have just received my 30 May issue, and its "Special Report"
feature is entitled "Troubled Times for Scientific Journals", which
should be of interest to those of us who do (or must) publish.  (Only
a small section on Electronic Publishing.)
    Those interested in reprints of either special report can send
$3.00 for each report (although 10 or more copies of one report are only
$1.75 each).  Requests should be sent to:  Distribution, Room 210, American
Chemical Society, 1155--16th St., N.W., Washington, D.C. 20036.  They
want prepayment for orders less than or equal to $20.
    For those of you who are fans of Asimov's robot novels/stories,
the article "Molecular Electronic Devices Offer Challenging Goal"
might be one way of accomplishing the "positronic brain"?!  (This,
too, was in C&E News, but the 23 May issue...yes, C&E News is not my
highest reading priority--note the dates.)
    Thanks again for all your help.
                              Roz

------------------------------

Date: Thu 2 Jun 83 14:54:15-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: Current List of Hosts


The following BBoards and hosts are currently on the mailing
list.

AIDS-UNIX (4), BBNA, BBNG, BBN-UNIX (2), BBN-VAX,
UCBCAD@BERKELEY, UCBCORY@BERKELEY, AIList%UCBKIM@BERKELEY,
AIList@BRL, AI-Info@CIT-20, CMUA (4), CMU-CS-A (19),
CMU-CS-C (5), CMU-CS-G, CMU-CS-IUS, CMU-CS-SPICE (2),
CMU-CS-VLSI, CMU-CS-ZOG, CMU-RI-FAS (2), CMU-RI-ISL (3),
AIList@CORNELL, DEC-MARLBORO (3), ECLA, KESTREL,
HI-MULTICS, UW-Beaver!UTCSRGV@LBL-CSAM, VORTEX@LBL-CSAM,
MIT-DSPG (2), AIList-Distribution@MIT-EE, MIT-MC (16),
MIT-CIPG@MIT-MC, MIT-EECS@MIT-MC, MIT-OZ@MIT-MC (18),
MIT-ML (3), MIT-OZ@MIT-ML, MIT-MULTICS, MIT-SPEECH,
bbAI-List@MIT-XX (+6), NADC, NBS-VMS, AI@NLM-MCS, NPRDC (2),
NYU-AIList@NYU, OFFICE-3, XeroxAIList↑.PA@PARC-MAXC,
AI@RADC-TOPS20, {EMORY, IBM-SJ, AIList.RICE, TEKTRONIX,
UCI-AIList.UCI, UIUC}@Rand-Relay, AIList-BBOARD@RUTGERS (+3),
S1-C, AIList@SRI-AI (+7), SRI-CSL, SRI-KL (7), SRI-TSC (2),
AIList-Usenet@SRI-UNIX, SU-AI, Incoming-AIList@SUMEX,
SUMEX-AIM, DSN-AI@SU-DSN, SU-SIERRA@SU-DSN, SU-SCORE (10),
G@SU-SCORE (2), Local-AI-BBoard%SAIL@SU-SCORE (+2),
UCLA-LOCUS (2), V.AI-News@UCLA-LOCUS, {BUFFALO-CS,
Spaf.GATech, AIList.UMASS-CS (+1), AI-BBD.UMCP-CS,
Post-AIList.UNC}@UDel-Relay, USC-ECL (5), USC-ECLB (3),
USC-ECLC (3), SU-AI@USC-ECL (6), USC-ISI (3), USC-ISIB (7),
USC-ISID, EDXA%UCL-CS@ISID, USC-ISIE, USC-ISIF (8),
UTAH-20 (8), BBOARD.AIList@UTEXAS-20, CC@UTEXAS-20,
CMP@UTEXAS-20, G.TI.DAK@UTEXAS-20, WASHINGTON (5), XX,
AI-LOCAL@YALE (+1).

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

∂03-Jun-83  1853	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #11
Received: from SU-SCORE by SU-AI with TCP/SMTP; 3 Jun 83  18:53:29 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Fri 3 Jun 83 18:56:58-PDT
Date: Friday, June 3, 1983 5:34PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #11
To: AIList@SRI-AI


AIList Digest            Saturday, 4 Jun 1983      Volume 1 : Issue 11

Today's Topics:
  Quasiformal languages
  Prolog Expert Systems
  Expert Systems Bibliography [truncated]
----------------------------------------------------------------------

Date: Fri 6 May 83 17:50:20-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Quasiformal languages

[Reprinted from the Prolog Digest.]

LESK is [a quasiformal] language, developed by Doug Skuce of the CS 
Dept. of the University of Ottawa, Canada.  He has implemented it in 
Prolog.  The language allows the definition of classes (types), isa 
relationships, and complex part-whole relationships, and has a formal 
semantics (it's just logic in disguise).  It has a nice English-like 
flavor.  A reference is

"Expressing Qualitative Biomedical Knowledge Exactly Using the 
Language LESK", D. S. Skuce, Comput. Biol. Med., vol. 15, no. 1, pp.  
57-69, 1982.
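
[An illustrative aside, not part of the original message: the remark
that a LESK-style language is "just logic in disguise" can be made
concrete with a toy sketch.  The facts and function below are invented
examples, not LESK syntax; LESK itself is implemented in Prolog.]

```python
# Toy sketch of quasiformal statements as plain logic: classes, isa
# links, and part-whole links stored as ground facts, with a single
# transitive query over whichever relation is asked about.
# All names here are invented biomedical-flavored examples.
ISA = {"ventricle": "chamber", "chamber": "anatomical_part"}
PART_OF = {"ventricle": "heart", "heart": "circulatory_system"}

def closure(links, x):
    """Follow a chain of links transitively, e.g. everything x isa."""
    out = []
    while x in links:
        x = links[x]
        out.append(x)
    return out
```

For example, `closure(ISA, "ventricle")` yields
`["chamber", "anatomical_part"]`; the English-like surface syntax of
such languages compiles down to inferences of exactly this shape.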

Fernando

------------------------------

Date: 15 May 1983 20:46:53-PDT (Sunday)
From: Adrian Walker <ADRIAN.IBM-SJ@Rand-Relay>
Subject: Prolog Expert Systems

[Reprinted from the Prolog Digest.]


Reports available from IBM T.J. Watson Research Center, Distribution 
Services, Post Office Box 218, Yorktown Heights, New York 10598.

    Automatic Generation Of Explanations Of Results From
    Knowledge Bases. Report RJ 3481. Adrian Walker.

    Prolog/Ex1, An Inference Engine Which Explains Both Yes
    and No Answers. Report RJ 3771. Adrian Walker.

Report available from Adrian Walker, Department K51, IBM Research 
Laboratory, 5600 Cottle Road, San Jose, CA 95193.  (Adrian @ IBM-SJ).

    Data bases, Expert Systems, and Prolog. Report RJ 3870.
    Adrian Walker.

Report available from Department of Computer Science, New York 
University, 251 Mercer Street, New York, NY 10012.

    Syllog: a knowledge based data management system. Report
    No. 034, Department of Computer Science, New York University.
    Adrian Walker.


[...]

Adrian

------------------------------

Date: Thu 2 Jun 83 09:56:00-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems Bibliography [truncated]

I published a bibliography of recent expert systems reports in AIList 
#5.  There is also a brief bibliography by Michael Rychener in the 
Oct. 1981 issue of SIGART and an extensive bibliography by Bruce 
Buchanan in the April 1983 issue of SIGART.  These three lists have 
almost no overlap.

I present here an additional list of references for expert systems, 
problem solving, and learning.  It contains only references not given 
in the previously mentioned sources.

I am still looking for material on expert systems and vision.  I have 
lists of technical reports from Stanford, MIT, and SRI.  I have also 
gone through the latest proceedings for IJCAI, AAAI, PatRec, PRIP, and
the DARPA IU Workshop.  Other sources or machine-readable citations
would be most welcome.  Please send them to Laws@SRI-AI or to the
AIList.

                                        -- Ken Laws


J. Bamberger, Capturing Intuitive Knowledge in Procedural Description,
AIM-398 (LOGO Memo 42), AI-MIT, Dec. 1976.

H.G. Barrow, Artificial Intelligence: State of the Art, TN 198, 
SRI-AI, Oct. 1979.

 . .

[ The entire list is 19,000 characters, or 22.1K for the digest.
Those who are interested may FTP it from <AILIST>V1N11.TXT on
SRI-AI.  Let me know if you need help: I can mail a few copies or
establish additional FTP sites.  -- KIL]

------------------------------

End of AIList Digest
********************

∂07-Jun-83  1708	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #12
Received: from SU-SCORE by SU-AI with TCP/SMTP; 7 Jun 83  17:08:16 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Tue 7 Jun 83 17:11:44-PDT
Date: Tuesday, June 7, 1983 3:03PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #12
To: AIList@SRI-AI


AIList Digest            Tuesday, 7 Jun 1983       Volume 1 : Issue 12

Today's Topics:
  Usenet Administrivia
  Kurzweil's Reading Machines (2)
  Subjective Visual Phenomena (2)
----------------------------------------------------------------------

Date: Mon 6 Jun 83 08:51:47-PDT
From: Laws@SRI-AI <AIList-Request@SRI-AI.ARPA>
Subject: Usenet Administrivia

Andrew Knutsen@SRI-Unix, who controls the gateway between AIList and
the Usenet net.ai discussion, has developed new gateway software that
separates the AIList items and deletes those originating from Usenet
sites.  I have modified the digesting software to pass through Usenet
Article-I.D.  headers as flags for the gateway.
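
[An illustrative aside, not part of the original message: the gateway
rule described above amounts to dropping any digest item that carries
a Usenet Article-I.D. header, since such items originated on Usenet
and should not be echoed back to net.ai.  A minimal sketch, with
invented function names and data shapes:]

```python
# Minimal sketch of the gateway filtering rule: an item that carries a
# Usenet Article-I.D. header originated on Usenet, so it is withheld
# when the digest is forwarded back to the Usenet side.
def originated_on_usenet(item):
    """True if any header line of this digest item is an Article-I.D."""
    return any(line.lower().startswith("article-i.d.:")
               for line in item.splitlines())

def items_to_forward(digest_items):
    """Keep only items safe to echo to net.ai (no Usenet origin)."""
    return [item for item in digest_items
            if not originated_on_usenet(item)]
```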

                                        -- Ken Laws

------------------------------

Date: 1 Jun 83 21:04:41-PDT (Wed)
From: decvax!minow @ Ucb-Vax
Subject: Re: Reading machines -- an answer to the question
Article-I.D.: decvax.107

Kurzweil Computer Company, in Cambridge MA, makes several reading
machines, including one with a built-in voice synthesizer for
visually-handicapped users.  There are about 20 scattered around in
New England public libraries.

They also make a "commercial" version that may be used as an
intelligent input device to a computer -- it reads several fonts and
is trainable.  It is also fairly expensive.

Much of the theory behind the machine was explained in Kurzweil's MIT
thesis.  (Sorry, don't have a reference.)

While there are a number of page readers on the market that read OCR-B
(which looks fairly reasonable), the Kurzweil seems to be unique in
that it can read many fonts.

Martin Minow decvax!minow

------------------------------

Date: 7 Jun 83 16:45:30 EDT
From: NUDEL.CL <NUDEL.CL@RUTGERS.ARPA>
Subject: Kurzweil's reading machine

[...]

There is a write-up on Kurzweil and his work in this week's U.S. News
and World Report - June 13, 1983 page 63. It mentions his reading
machine, plans for a reading interface for automatic input to
computers directly from the printed page without the need for key
punching, and a voice-activated word processor.

Bernard

------------------------------

Date: 2 Jun 83 4:16:33-PDT (Thu)
From: harpo!floyd!vax135!cornell!uw-beaver!tektronix!ucbcad!ucbesvax.t
      turner @ Ucb-Vax
Subject: Subjective Visual Phenomena
Article-I.D.: ucbcad.678


        Talk of retinas, and composition of daemons for the "retina" 
of a computer-resident intelligence, got me to thinking of my own
retina.  I am not an expert in neuro-ocular phenomena, so if you are,
please bear with me.  I am wondering if there are explanations for
some of the following perceptions:

   1. One day some years ago I managed to walk on a railroad
      rail for about 1/2 a mile.  For at least fifteen minutes
      afterward, there was a vertical band in my field of vision,
      crossing the center, which seemed to be moving upward.
      This band corresponded to the rail I had been staring at.
      I was able to repeat this effect.

   2. In a quiet, distraction-free, dimly lit environment, I am
      able to look at an object against a uniform background,
      and somehow make it blend in enough with its background
      that it seems to disappear.  This requires considerable
      effort, and seldom lasts longer than a few seconds.  Usually,
      the object reappears when I try to focus on some feature
      or detail that seems "behind" the object.  I am fairly sure
      that this is not simply a matter of coordinating both
      eyes so that both blind-spots coincide over the image of
      the object.  It is definitely in the center of my vision.
      The image also reappears if I move my eyes at all--and
      since small eye movements are involuntary, this effect
      suggests that these movements play a role in keeping
      retinal responses flowing, whereas the image would
      decay otherwise.

   3. Recently, I have been playing a video game ("Quantum", Atari)
      that has an interesting feature: there is an object which
      moves around the screen (itself worth only 100 points)
      that leaves behind images of itself that shrink down to
      a point and disappear.  Capturing (before disappearance)
      these images is worth 300 points.  When I play to make points
      by capturing these shrinking images, there is a persistent
      after-effect that is most apparent when trying to read: as
      my eyes skip around a page, letters and words on it seem
      to shrink.  This does not happen when I play and ignore the
      shrinking "particles", or capture them only incidentally.
      The effect seems related to searching for and focussing on
      these images for several minutes of play.  It is often very
      pronounced and distracting.

    The human visual system seems to be educable at several levels.  
Perhaps there are interactions between these levels that haven't been
explored yet.

    Comments appreciated.
        Michael Turner
        ucbvax!ucbesvax.turner

------------------------------

Date: 3 Jun 83 9:04:29-PDT (Fri)
From: ihnp4!houxm!hocda!spanky!burl!duke!mcnc!ncsu!fostel @ Ucb-Vax
Subject: Re: Visual After-effects
Article-I.D.: ncsu.2199


The effects described such as the railroad track and video after
effects are well known by psychologists, and indeed are one of the
tools used to study the levels and types of processing in the optic
system. Most introductory texts on the subject will include a few
pictures to stare at in certain ways to achieve some of the types of
after-effects you noted.  I believe Scientific American even gave
away a resubscription freebie on the subject a few (6?) years ago.

The earliest description of the phenomenon I know of (circa 1910) by
a reputable psychologist was from a fellow who had a small blind spot
in his retina.  (Was this Lashley?)  He observed once at a party that
when a person stood against a highly regular wallpaper and their face
fell in his "spot", their head would be "removed" and replaced by the
wallpaper pattern!  The visual system was simply making its best
guess at what should be reported by those bad receptors.  A bit of
experimenting later, it was shown that the effect could be reproduced
with anyone by simply fatiguing the receptors at one spot (simulating
a defect): stare intently at one object without blinking, moving the
head, or saccading the eyes.  If the level of fatigue is great enough
and the background suitably benign and predictable, the object stared
at will indeed disappear, actually being replaced by the visual
system's best guess for what the fatigued cells would report if they
were sending out a better signal.

My own experience with video games provides some confirmation of the
"modern" experience.  I play Robotron, occasionally for several
hours (it takes a while to recycle the 9,999,999 score), which involves
LOTS of little glowing things moving about, some of which must be
avoided and shot, and some of which must be "rescued".  After such a
binge, I will see afterimages of the little Good guys I must rescue,
but never the bad killer robots.  Now THAT is a high level of
processing in the optic system: it seems to be able to tell good from
bad!!

    ----GaryFostel----

------------------------------

End of AIList Digest
********************

∂08-Jun-83  1339	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #13
Received: from SU-SCORE by SU-AI with TCP/SMTP; 8 Jun 83  13:38:32 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 8 Jun 83 12:44:57-PDT
Date: Wednesday, June 8, 1983 10:28AM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #13
To: AIList@SRI-AI


AIList Digest           Wednesday, 8 Jun 1983      Volume 1 : Issue 13

Today's Topics:
  PSL 3.1 Available
  DEMONSTRATIONS AT THE JUNE ACL MEETING IN CAMBRIDGE
  NSF FUNDS IJCAI TRAVEL; APPLICATION DEADLINE EXTENDED TO 6-15
----------------------------------------------------------------------

Date: 8 Jun 1983 0810-MDT
From: Robert R. Kessler <KESSLER@UTAH-20>
Subject: PSL 3.1 Available


                             PSL 3.1 AVAILABILITY

  PSL (Portable Standard LISP) is a new LISP implemented at the
University of Utah as a successor to the various Standard LISP systems
we previously distributed.  PSL is about the power, speed and flavor
of Franz LISP or MACLISP, with growing influence from Common LISP.  It
is recognized as an efficient and portable LISP implementation with
many more capabilities than described in the 1979 Standard LISP
Report.

  PSL's efficiency and portability is obtained by writing essentially
all of PSL in itself, and using an optimizing compiler driven by
tables describing the target hardware and software environment.  A
standard PSL distribution includes all the sources needed to build,
modify and maintain PSL on that machine, the executables and a manual.
PSL has a machine oriented "mode" for systems programming in LISP
(SYSLISP) that permits access to the target machine about as
efficiently as in C or PASCAL.  This mode provides for significant
speed up of user programs.

  PSL is in heavy use at Utah, and by collaborators at
Hewlett-Packard, Rand, Stanford and other sites.  Many existing
programs and applications have been adapted to PSL including Hearn's
REDUCE computer algebra system and GLISP, Novak's object oriented LISP
dialect. These are available from Hearn and Novak.

  PSL systems available from Utah include:

  VAX, Unix (4.1, 4.1a)          1600 BPI Tar format
  DEC-20, Tops-20 V4 & V5        1600 BPI Dumper format
  Apollo, Aegis 5.0              6 floppy disks, RBAK format
  Extended DEC-20, Tops-20 V5    1600 BPI Dumper format

  We are currently charging a $200 tape or floppy distribution fee for
each system.  To obtain a copy of the license and order form, please
send a NET message or letter with your US MAIL address to:

  Utah Symbolic Computation Group Secretary
  University of Utah - Dept. of Computer Science
  3160 Merrill Engineering Building
  Salt Lake City, Utah 84112

ARPANET: CRUSE@UTAH-20
USENET:  utah-cs!cruse

------------------------------

Date: Fri 3 Jun 83 10:03:46-PDT
From: Don Walker <WALKER@SRI-AI.ARPA>
Subject: DEMONSTRATIONS AT THE JUNE ACL MEETING IN CAMBRIDGE

[I apologize for not picking up on this and the next item sooner.  I
try to report pertinent items from other BBoards, but haven't quite
mastered the habit yet.  -- KIL]

People who want to demonstrate programs or systems at the forthcoming 
Annual Meeting of the Association for Computational Linguistics at MIT
on 15-17 June should contact Jon Allen as soon as possible at 
NLG.JA@mit-speech or 617:253-2509.  A variety of hardware support 
facilities are available.  We would like to provide a good
representation of current capabilities at the meeting.

------------------------------

Date: Fri 3 Jun 83 12:47:42-PDT
From: Don Walker <WALKER@SRI-AI.ARPA>
Subject: NSF FUNDS IJCAI TRAVEL; APPLICATION DEADLINE EXTENDED TO 6-15

TRAVEL SUPPORT FOR US PARTICIPANTS TO IJCAI-83
     NSF GRANT APPROVED; DEADLINE FOR APPLICATIONS EXTENDED TO 15 JUNE

IJCAII has just been informed that NSF will provide a grant for travel
support of US participants to IJCAI-83 in Karlsruhe.  The plan is to
support up to 40 US participants with travel allowances that average
$800 per person.

Because of timing constraints, we are asking US residents who are
interested in travel support for participation in IJCAI-83 to provide
us AS SOON AS POSSIBLE with a letter indicating:

    request for travel support; plans for participation at IJCAI-83
    (e.g. presentation of paper, participation in panel); expected
    benefits derived from attending; willingness to provide a
    post-conference report; current sources of research support;
    availability of travel support from other sources; and a brief
    vita.

Students are encouraged to add a letter of reference submitted by a
faculty member.

The applications should be sent to:

    Priscilla Rasmussen
    IJCAI-83 Committee on Travel
    Laboratory for Computer Science Research
    Hill Center, Busch Campus
    Rutgers University
    New Brunswick, NJ 08903

The revised deadline for applications is June 15, 1983.

The applications will be reviewed by an IJCAII selection committee.
The criteria for selection will be as follows: (1) current and past
achievements in AI (special consideration will be given to those who -
in the judgment of the IJCAI-83 Program Committee - contributed a very
high quality paper to the conference); (2) potential for contributions
in the field - that may be stimulated by attendance at the conference;
(3) lack of sufficient alternative funds to enable participation at
the conference. Priority will be given to younger, promising members
of the AI community who would not be able to attend the conference
because of lack of travel funds.

Please note that those who wish to be considered for travel support 
through this grant must use US airlines for their travel to Germany.  
Contact Iris Kay at Custom Travel Consultants (415:369-2105, 2115; 
2105 Woodside Road, Woodside, CA 94062) for further information on
special US airline rates.

Saul Amarel
General Chairman, IJCAI-83

------------------------------

End of AIList Digest
********************

∂11-Jun-83  2255	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #14
Received: from SU-SCORE by SU-AI with TCP/SMTP; 11 Jun 83  22:55:13 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Sat 11 Jun 83 22:56:44-PDT
Date: Saturday, June 11, 1983 9:30PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #14
To: AIList@SRI-AI


AIList Digest            Sunday, 12 Jun 1983       Volume 1 : Issue 14

Today's Topics:
  VAX or PDP-11/23 LISP?
  Fortune or Onyx LISP?
  Re: Visual After-effects
  Springer Verlag Prize for Symbolic Computation at IJCAI-83
----------------------------------------------------------------------

Date: 9 Jun 1983 1342-PDT
From: JBROOKSHIRE@USC-ECLB
Subject: UNIX, Eunice, LISP

Naive users looking for connections whereby we might
        i.  get LISP for VAX/VMS, maybe via Eunice?
        ii. get LISP for the PDP-11/23 under RSX-11, maybe the same way?
Pointers to contacts will be greatly appreciated.  Jerry

[Availability of VAX Interlisp was noted in V1 #10.  Contact
Interlisp@ISIB. -- KIL]

------------------------------

Date: 10 June 1983 06:34 EDT
From: Michael A. Bloom <MCB @ MIT-MC>
Subject: Lisps?  Fortune? or Onyx?


I'm looking for a Lisp for the Fortune 68K computer.  Is anyone aware
of one existing?  Has anyone ported Franz Lisp to the fortune?

Also, has anyone ported ANY Lisp to the Onyx C8002 running system
III?

I'll be grateful for any leads.

- Michael Bloom
        mcb@mit-mc

------------------------------

Date: 9 Jun 83 16:42:42-PDT (Thu)
From: decvax!cca!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Visual After-effects
Article-I.D.: dciem.240

Actually, the blind-spot game of removing people's heads has a long
history. King Charles II of England used to amuse himself by seeing
how his courtiers would look without their heads. And it is true that
any regular pattern behind will be filled in across either the normal
blind spot or blind spots due to retinal problems.

As for the effect in which objects tend to disappear if stared at, 
this is normally studied with special devices attached to the eyeball 
(on a contact lens) to ensure that the visual world remains stationary
on the eye. Objects rapidly vanish under these conditions, but
reappear in fragmentary form from time to time. Very slight shifts of
viewpoint tend to make the objects come back, which is probably the
reason attending to a detail "behind" the object makes it return. It
is easier to make things with blurred or diffuse edges go away than
things with sharp edges (so I imagine people with poor eyesight can do
it easier than people with good vision).

The effect of changing letter size after watching for game objects 
that change size is another example of the same kind of thing as the
railroad track after-movement effect. It's probably a different visual
channel (we have separate channels for size changes and for movement)
but the principle is the same. Some people claim that the effect is
due to fatigue of the system sensitive to movement in one direction,
leaving the balancing components sensitive to movement in the other
direction to control what is seen when the stimulation is neutral
(i.e., the other direction is more sensitive after one is fatigued).
I'm not convinced by this explanation. Things are probably more
complicated than that.

------------------------------

Date: Tuesday, 7-Jun-83  17:20:13-BST
From: BUNDY    HPS (on ERCC DEC-10)  <bundy@edxa>
Reply-to: bundy@rutgers
Subject: Springer Verlag Prize for Symbolic Computation at IJCAI-83

--------

                        IJCAI-83

        SPRINGER-VERLAG PRIZE FOR SYMBOLIC COMPUTATION


I am pleased to announce that the paper, "Scale-Space Filtering", by 
Andy Witkin of Fairchild Artificial Intelligence Research Laboratory, 
has been awarded the Springer-Verlag prize for Symbolic Computation.  
The prize will be presented at the Eighth International Joint 
Conference on Artificial Intelligence, to be held in Karlsruhe, West 
Germany, from 8th to 12th August 1983.

The Symbolic Computation Prize has recently been announced by 
Springer-Verlag, as a sign of their interest in Artificial
Intelligence and in the work of the scientists active in this field.
It is named after their new book series on Artificial Intelligence and
Computer Graphics, and is awarded, by the programme committee, to the
best paper contributed to the IJCAI conference.  The prize is $500.

The IJCAI-83 programme committee has interpreted its brief as being to
select the paper which best meets the following criteria.

(a) It reports a significant and original piece of research of direct 
relevance to Artificial Intelligence.

(b) This research serves as a model for how Artificial Intelligence 
research should be conducted.

(c) The paper is well presented for a specialist reader.

Witkin's paper is clearly presented and is intelligible to a 
non-specialist reader, without sacrificing technical validity and 
clarity.  It describes a new approach to perceptual organization, and
an implementation with satisfying performance.

Among the other papers submitted to IJCAI-83 and considered for the 
Symbolic Computation Prize, the programme committee would like to give
an honourable mention to "Completeness of the Negation as Failure 
Rule", by Joxan Jaffar, Jean-Louis Lassez and John Lloyd of the 
University of Melbourne.


                        Alan Bundy
                        Programme Chairman, IJCAI-83

------------------------------

End of AIList Digest
********************

∂15-Jun-83  0011	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #15
Received: from SU-SCORE by SU-AI with TCP/SMTP; 15 Jun 83  00:10:50 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Wed 15 Jun 83 00:12:16-PDT
Date: Tuesday, June 14, 1983 10:42PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #15
To: AIList@SRI-AI


AIList Digest           Wednesday, 15 Jun 1983     Volume 1 : Issue 15

Today's Topics:
  Natural Language Challenge
  An entertaining AI project?
  Lisp for VAX/VMS
  Prolog For The Vax
  Description of AI research at TRW
  1984 National Computer Conference: Call for papers
----------------------------------------------------------------------

Date: 10 Jun 1983 at 0928-PDT
From: zaumen@Sri-Tsc
Subject: Natural Language Application

Recently I had to try to understand the following sentence:

   The insured hereby covenants that no release has been or will
   be given to or settlement of compromise made with any third
   party who may be liable in damages to the insured and the
   insured in consideration of the payment made under the
   policy hereby assigns and subrogates to the said Company
   all rights and causes of action he may have because of this
   loss, to the extent of payments made hereunder to him, and the
   insured hereby authorizes the Company to prosecute any claim or
   suit in its name or the name of the insured, against any person 
   or organization legally responsible for the loss.

I could only guess at what this means.  The main clue seems to be that
the reference to "the Company" is in a form normally reserved for a
deity.

I agree to give one can of Coors Lite to the first person who shows me
a valid parsing (done by an AI program) of the above legalese.  This 
may seem like a very low payment considering the difficulty of the 
task: it merely reflects my opinion of legalese.

------------------------------

Date: 12 Jun 1983 1733-MDT
From: William Galway <Galway@UTAH-20>
Subject: An entertaining AI project?

I seem to recall that "off the wall" ideas were suggested as one of
the topics for this mailing list, so here goes.

We're all familiar with computer programs that play games like chess
and backgammon, but what about the new generation of games that have
sprung up with computers?  For example, I think ROGUE would be a
nearly ideal game for a computer to play both sides of.  The game is
highly structured in many ways, but might still provide interesting
problems in perception, knowledge representation, and learning.

Would anyone care to take the challenge to write such a program?
Could they suggest other similar games that would be appropriate for
computers to play?  (Pacman?)  Is there anything new to be learned in
writing such a program, or would it just be an expensive toy?  (Or
teaching aid, for a class project?)

Thanks.

--Will Galway

------------------------------

Date: Tue, 14 Jun 1983  20:58 EDT
From: GJC%MIT-OZ@MIT-MC
Subject: Lisp for VAX/VMS

VAX-NIL is a native VAX/VMS lisp programming environment, receiving 
support from both the Laboratory for Computer Science and the 
Artificial Intelligence Laboratory at MIT for use as a research tool.  
As a lisp programming environment it is entirely self contained in one
large address space, including a compatible EMACS editor written in 
NIL. The language is a superset of that defined in the Common-Lisp 
standard, and is greatly influenced by many language features of the 
Lispmachine and Maclisp.

A distribution kit can be obtained from GSB@MIT-ML.

-GJC

------------------------------

Date: Sun 12 Jun 83 19:44:10-PDT
From: SHardy@SRI-KL.ARPA
Subject: Prolog For The Vax

[Reprinted from the Prolog Digest.]

Implementation For VAX/VMS

The Sussex Poplog system is a multi-language programming environment 
for AI tasks.  It includes:

(a) A native mode Prolog compiler, compatible with the Clocksin and
    Mellish book.  The system supports floating point arithmetic.

(b) A POP-11 compiler.  POP-11 and Prolog programs may share data
    structures and may call each other as subroutines; they may also
    co-routine with each other. (POP is the British derivative of
    LISP; functionally equivalent to Lisp, it has a more conventional
    syntax.)

(c) VED, an Emacs like extendible editor, is part of the run time
    system.  VED is written in POP-11 and so can easily be extended.
    It can also be used for input (e.g. simple menus) and for output
    (simple cellular graphics).  VED and the compilers share memory,
    making for a well integrated programming environment.

(d) Subroutines written in other languages, e.g. Fortran, may be
    linked in as new built in predicates.

Poplog's complex architecture was designed to help build blackboard 
systems working on large amounts of numerical data.  The intention is 
that Fortran (or a similar language) be used for array processing; 
POP-11 will be used for manipulating agendas and other procedurally 
oriented tasks, and Prolog will be used for logical inference.

However, the components of Poplog can be used individually without 
knowledge of the other components.  To some users, Poplog is simply a 
powerful text editor; to others it is just a Prolog system.

Poplog has been adopted, along with Franz LISP and DEC-20 Prolog, as 
part of the "common software base" for the IKBS program (Britain's 
response to The Fifth Generation).

The system is being transported to the PERQ and Motorola 68000, as 
well as being converted for VAX/UNIX.

Although full details haven't yet been announced, the system will be 
commercially supported.  The license fee will be approx. $10,000, with 
maintenance approx. $1,000 per annum.  For more details, write to:


                Dr Aaron Sloman
                Cognitive Studies Programme
                University of Sussex
                Falmer, Brighton, ENGLAND
                (273) 606755

-- Steve Hardy,
   Teknowledge

------------------------------

Date: 10 Jun 83 9:18:36-PDT (Fri)
From: 
Subject: Description of AI research at TRW
Article-I.D.: trw-unix.302

                          AI RESEARCH AT TRW
                              June, 1983

     This short note is meant to describe current AI research taking 
place at ("A Company Called...") TRW.  In the past I've gotten curious
and quizzical looks from folks at AAAI and other conferences when I
tell them where I work.  Perhaps it would be informative to give a
quick rundown of what sort of AI we do around here.
     AI research is going on in at least four laboratories in three 
locations, all within TRW's Defense Systems Group (although we
"consult" internally to the Space and Technology Group).  We will be
presenting at least three papers at IJCAI and AAAI this year, so one
can see our growing involvement.  For more detailed info, I welcome
your inquiries.

Systems Engineering and Development Division (Redondo Beach, CA):
     Projects include extensive experiments with decision aids for
military command and control needs.  The problems range from situation
assessment to resource allocation techniques.  Of particular recent
interest is the use of object-oriented languages for strategic and
tactical modelling and gaming, as well as various inference schemes to
analyze and diagnose the states of those models to aid the user in
creating plans of action.
     Additional work is being done in intelligent terminal design,
heuristic system parameter tuning, a little bit of smart database
query work, and a lot of work on fancy highly adaptable I/O and
graphics for Intelligence Analysis workstations.

Software and Information Systems Division (Redondo Beach, CA):
     This Division concentrates on signal processing applications of
various AI techniques.  Work continues to expand in pattern analysis,
deduction mechanisms for signal processing and system tuning, and for
computer network analysis.

ESL, Inc. (Sunnyvale, CA):
     This subsidiary of TRW also works heavily in the signal
processing arena.  It also uses expert systems approaches to diagnose
states of the (electronic) world.  Further, one project is providing
experimental automated decision support for strategic indications and
warning analysts.

Special Programs (Washington, DC):
     This group of specialists provides domain knowledge support for
the various systems under research or development in the rest of the
company.  This expertise augments that already in California.

-----
     We use all of the software and hardware tools we can find, at
least to try them out.  A complete list would be too long for this
note.

     I hope this has cleared up some of the most frequently asked
questions about what TRW is doing in AI....
                                           Mark D. Grover
                                           TRW Defense Systems Group
                                           One Space Park, 134/4851
                                           Redondo Beach, CA 90278
                                           (213) 217-3563
                                           {decvax, ucbvax, randvax}!
                                               trw-unix!mdgrover

------------------------------

Date: Sun 12 Jun 83 13:22:05-PDT
From: Jim Miller <JMILLER@SUMEX-AIM.ARPA>
Subject: 1984 National Computer Conference: Call for papers

     The call for papers for the 1984 National Computer Conference has
been released; a copy of it is enclosed below.  As the program chair
for the artificial intelligence / human-computer interaction track, I
hope that members of the AI community will give serious thought to
preparing papers and sessions for NCC.  This meeting offers us a real
voice in the conference's program, as six program sessions will be
devoted to AI, far more than in the past.  Proposals on any aspect of
AI are welcome; I would only note that most of the people attending
the conference will have little familiarity with AI.  Consequently,
extremely technical papers or sessions are probably not appropriate
for this meeting.  I am particularly interested in sessions that would
summarize important subareas of AI at an introductory or tutorial
level, perhaps especially those that address aspects of AI that are
beginning to have an impact on the computer industry and society at
large.  Please contact me if you have any questions about the
conference; my address, net address, and phone are below.

     Jim Miller


------------------------------------------------------------------------


              A CALL FOR PAPERS, SESSIONS, AND SUGGESTIONS
                   1984 NATIONAL COMPUTER CONFERENCE
        July 9-12, 1984, Convention Center, Las Vegas, Nevada

                E N H A N C I N G C R E A T I V I T Y

     You are invited to attend and to participate in the 1984 NCC 
program.  The 1984 theme, "Enhancing Creativity," reflects the 
increasing personalization of computer systems, and the attendant
focus on individual productivity and innovation.  In concert with the
expanded degrees of connectivity resulting from advances in data
communications, this trend is leading to dramatic changes in the
office, the factory, and the home.

     The 1984 program will feature informative sessions on
contemporary issues that are critically important to the industry.
Sessions and papers will be selected on the basis of quality,
topicality, and suitability for the NCC audience.  All subjects
related to computing technology and applications are suitable.

     YOU CAN PARTICIPATE BY:

   - Writing a paper

        * Send for "Instructions to Authors" TODAY.

        * Submit papers by October 31, 1983.

   - Organizing and leading a session

        * Send preliminary proposal (title, abstract, target
          audience) by July 15, 1983.

        * After preliminary approval, send final session proposal
          by August 30, 1983.

   - Serving as a reviewer for submitted papers and sessions

     Authors and session leaders will receive final notification of 
acceptance by January 31, 1984.

     Send all submissions, proposals, correspondence and inquiries
about papers and sessions on ARTIFICIAL INTELLIGENCE or HUMAN-COMPUTER
INTERACTION to:

    James R. Miller
    Computer * Thought Corporation
    1721 West Plano Parkway
    Plano, Texas 75075
    214-424-3511
    JMILLER@SUMEX-AIM

     Send all other proposals or inquiries to:

    Dennis J. Frailey, Program Chairman
    Texas Instruments Incorporated
    8642-A Spicewood Springs Road
    Suite 1984
    P.O. Box 10988
    Austin, Texas 78766-1988
    512-250-6663

------------------------------

End of AIList Digest
********************

∂16-Jun-83  1922	@SU-SCORE.ARPA:LAWS@SRI-AI.ARPA 	AIList Digest   V1 #16
Received: from SU-SCORE by SU-AI with TCP/SMTP; 16 Jun 83  19:21:57 PDT
Received: from SRI-AI.ARPA by SU-SCORE.ARPA with TCP; Thu 16 Jun 83 19:23:25-PDT
Date: Thursday, June 16, 1983 5:19PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #16
To: AIList@SRI-AI


AIList Digest            Friday, 17 Jun 1983       Volume 1 : Issue 16

Today's Topics:
  Encouragement for Lab Reports
  LISP for VAX/VMS
  Re: Natural Language Challenge (2)
  Re: Adventure games as AI (3)
  Lunar Rover (2)
----------------------------------------------------------------------

Date: 14 Jun 83 0:18:31-PDT (Tue)
From: hplabs!hp-pcd!jrf @ Ucb-Vax
Subject: Re: Description of AI research at TRW - (nf)
Article-I.D.: hp-pcd.1149

Thanks for the info!  More, please.

jrf

------------------------------

Date: 14 Jun 1983 11:42-PDT
From: Andy Cromarty <andy@aids-unix>
Subject: LISP for VAX/VMS

[...]

If you are not concerned about maintaining compatibility with an 
existing LISP software base (e.g. MacLisp or InterLisp), then the 
"CLisp" dialect from UMass-Amherst (for VMS only) represents an 
excellent combination of highly developed LISP environment and 
efficient execution.  CLisp was developed using public funds; I
believe that it is available for the cost of a tape and mailing (i.e.
as far as I know they do not tack on a several hundred dollar
"distribution fee").  The current distributor and maintainer is Dan
Corkill at UMass-Amherst; send inquiries to

           CLISP.UMass-CS@UDel-Relay.

CLisp (not to be confused with the InterLisp "CLisp" syntactic-sugar 
subdialect) is a mature LISP influenced by both the MacLisp and 
InterLisp traditions but departing from both in several respects.  The
system includes substantial on-line documentation, a reasonably good 
optimizing compiler, an incarnation of the InterLisp editor, and good 
hooks into VMS subprocess and system service functions.  If I were 
working under VMS now, that's the LISP I would personally use over all
the others I know about (e.g. NIL, InterLisp, Utah's "Standard" LISP, 
Franz under Eunice, etc.).  (Unfortunately, since I'm working under 
Unix, we must struggle along with Franz.)

        cheers, asc

------------------------------

Date: 16 June 1983 01:36 EDT
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Natural Language Application

Do I count as an AI program?  I can parse your "legalese" for you.
The quoted paragraph essentially signs over to your insurance company
any rights you may have had to sue someone (anyone) over the accident.
This is in exchange for the company's payout on your claim.  They can
then (themselves) sue the people you would have been able to sue and
collect without bothering you or getting your approval.

This is not a legal opinion of any sort.  Please send me my can of
Coors Lite via the newly-created CLTP (Coors Lite Transmission
Protocol).

-- Steve

P.S.  The Los Angeles /Daily Journal/ is a legal newspaper which 
publishes a "sentence of the day" each day, culled from actual legal 
writing.  It is usually as bad as or worse than your quoted example.
They also publish a "sentence of the year" (!).

Since most human beings cannot parse a sentence of that opaqueness, no
AI program should pass the Turing test unless it also fails at it.  $$

------------------------------

Date: 16 Jun 1983 at 1350-PDT
From: zaumen@Sri-Tsc
Subject: Re:  Natural Language Application

Sorry, it has to be parsed by a program (I assume you are a person,
not a machine), so you don't get a real physical can of Coors Lite.

You mentioned that a program that could parse legalese (as convoluted
as in my example) would not pass the Turing test, as most people could
not parse it.  Lawyers claim to be able to parse it, thereby leading 
me to suspect that lawyers cannot pass the Turing test.  This leads to
an interesting question--are lawyers intelligent?  If lawyers are
intelligent, what does this imply about the Turing test?

Bill


[The lawyer could pass the test by pretending not to understand the
test sentence.  It has always been assumed that an intelligent machine
would similarly hide its superior arithmetic skill.  This requirement
for duplicity is a major failing of the Turing test.  -- KIL]

------------------------------

Date: 15 Jun 1983 1009-PDT
From: Jay <JAY@USC-ECLC>
Subject: Roguematic

  There is a program that plays ROGUE (Unix version, not 20 version) 
written in C for the UNIX operating system.  Playing games of any 
kind is interesting from an AI standpoint.

  Most arcade games involve little strategy and much reaction 
time/image recognition.  The strategy component could make a nice toy 
AI program, the reaction-time component would just be a hardware 
problem (or would it?), and the image recognition would be another 
domain for Image Understanding.

j'

------------------------------

Date: Wednesday, 15 June 1983 12:43:27 EDT
From: Michael.Mauldin@CMU-CS-CAD
Subject: An entertaining AI project.


You may be surprised to find that Rog-O-Matic, written by Andrew
Appel, Leonard Hamey, Guy Jacobson and Michael Mauldin at
Carnegie-Mellon University has been available for public consumption
since May 1982.  Rog-O-Matic XII is available from CMU, and version
VII has been at Berkeley since August of 1982.

Rog-O-Matic is written in C for Unix systems.  Rog-O-Matic has also 
been ported to VMS using Rice Phoenix.  Rog-O-Matic has been a total 
winner against Rogue 3.6, and has scored 7730 against Rogue 5.2 (quit 
while ascending from level 27 with the amulet).

Since our paper "Rog-O-Matic: A Belligerent Expert System" was not 
accepted to AAAI-83, it will be released this summer as a technical 
report of CMU.  Copies of the draft may be obtained by sending net
mail to "mauldin@CMU-CS-A", or by writing

        Michael Mauldin
        Dept. of Computer Science
        Carnegie-Mellon University
        Pittsburgh, PA 15213.

The source code is also publicly available, and can be mailed via the
net.  Or, mail a magtape to the address above, and we'll put it there
for you.

------------------------------

Date: 15 Jun 83 16:09:14 EDT
From: Ron <FISCHER@RUTGERS.ARPA>
Subject: Re: Adventure games as AI

I'm a systems staff member of the Lab for Computer Science Research 
here at Rutgers.  We have an informal group of hackers and programmers
undertaking the implementation of a multi-player adventure game.  
We're attempting to combine ROGUE-like strategy with ADVENTURE-like 
role-playing.

We'd like to have non-player characters with their own motivations.  
Non-player characters are those people in a role playing game being 
controlled by the game's referee.  In our case this control would be 
some chunk of software operating on a representation of the
character's goals and knowledge.

Can anyone provide references for papers in this area?  (Would anyone 
sponsor such a thing?  A game as research, bah!)

Agreed, adventure games are a very rich environment for this sort of 
thing.

(ron)

------------------------------

Date: Thu, 9 Jun 1983  01:15 EDT
From: Minsky@MIT-OZ
Subject: Lunar Rover

[Reprinted from the SPACE Digest.]

On Lunar Rover.

If I had 500K/year for research on a lunar rover, I wouldn't spend
any of it on AI or automatic obstacle avoidance, etc. at all.  I
would spend all of it on developing a good remote, all-purpose Rover
vehicle, to be controlled [from Earth] through a 2-1/2 second delay
system.  I would debug it in suitable local environments, e.g.,
starting in the Mojave or somewhere nice like that.  We'd see how
often the delay causes accidents; the top design speed would be
perhaps 0.2 meters/second so that most contingencies could be
handled in human reaction times.
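
As a back-of-envelope check on the figures above (a sketch only; the
operator reaction-time value is my assumption, not Minsky's):

```python
# Rough check of the delayed-teleoperation numbers: at 0.2 m/s with a
# ~2.5 s one-way Earth-Moon signal delay, how far does the rover travel
# before an operator's correction can take effect?

SPEED = 0.2           # m/s, proposed top design speed
ONE_WAY_DELAY = 2.5   # s, one-way signal delay
HUMAN_REACTION = 1.0  # s, rough operator reaction time (assumed)

# Dead time: telemetry comes down, the human reacts, the command goes up.
dead_time = 2 * ONE_WAY_DELAY + HUMAN_REACTION
blind_travel = SPEED * dead_time

print(f"dead time: {dead_time:.1f} s, blind travel: {blind_travel:.1f} m")
```

At these values the rover covers only about a meter before a correction
arrives, which is why a meter-ahead terrain probe would suffice.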

Once we know the accident rate we take two tacks.  First, simple 
automatic probes that measure the terrain a meter ahead of the beast 
so that it won't fall into crevasses that the operator missed or was 
too careless to avoid.  This simple "AI" work would then lead to 
increasingly conservative reliability.

The other tack would be mechanical escape devices.  For example, the
standard operation might be to use a retractable anchor that is
hooked to the terrain before advancing each 100 meters.  Then its
prongs are retracted and it is pulled back to the Rover and
reimplanted.  This would permit using a winch to get out of troubles.
It might not save the day if a landslide partly buries the Rover,
though.  A more advanced system would have TWO Rovers roped together,
like climbers, each with good manipulator capability.  (Climbers
prefer three.)  That could be enough to get out of most problems.

All this would lead to a Rover that can traverse about a 
kilometer/day.  A few of them could explore a lot of moon in a few 
years.  The project would stimulate some AI for use on Mars and other 
places.  But I think that over the next 3-5 years, the fewer new AI 
projects the better, in some ways, and anyone with such budgets should
aim them at AI education and research fellowships.

------------------------------

Date: 9 June 1983 08:24 EDT
From: Robert Elton Maas <REM @ MIT-MC>
Subject: rover

[Reprinted from the SPACE Digest.]

First year, build a bunch of servo units with built-in 2.5 second 
delay and attach them to a random survey of existing vehicles, both 
commercial (private automobiles, trucks, dune buggies, etc.) and 
experimental (HPM's cart, SRI's Shakey frame, Disney stuff, etc.).  
Audition the 10% unemployed as remote-controllers, keeping the best.  
Get as much info as possible the first year without having to actually
build any new vehicles.

Then from the general info about the 2.5 second delay and the human 
controllers, decide feasibility of lunar-rover project, and if 
feasible then use specific info about the various vehicles to decide 
what new vehicles to build in later years for further experiments.

------------------------------

End of AIList Digest
********************

∂26-Jun-83  1707	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #17
Received: from SRI-AI by SU-AI with TCP/SMTP; 26 Jun 83  17:07:36 PDT
Date: Sunday, June 26, 1983 3:39PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #17
To: AIList@SRI-AI


AIList Digest            Sunday, 26 Jun 1983       Volume 1 : Issue 17

Today's Topics:
  Telepresence
  Re: Lunar Rovers
  Robotics Control Systems
  Computer Disasters
  WANTED:  Information about Grad Schools
  net.ai [Humor?]
----------------------------------------------------------------------

Date: Fri 17 Jun 83 10:16:17-PDT
From: Slava Prazdny <Prazdny at SRI-KL>
Subject: Telepresence

The concept of telepresence could benefit greatly from considering
"intelligent manipulators".  These would typically contain a
bare minimum of "AI" to be able to perform requests like:

  "pick up that thing (the operator points to a screen
  location) and put it over here (again pointer to a screen
  location)"; or

  "go over there (pointer to a screen location) using this
  route (operator points to a set of points on the screen)".

Perhaps, sometime in the future (100 years?), these commands could be 
generated by the machine itself.

I have some scribblings on these matters, so if you are interested.....
-Slava.

------------------------------

Date: 21 Jun 1983 0536-PDT
From: FC01@USC-ECL
Subject: Re: Lunar Rovers

A very good reason for using AI instead of hardware is that taking
extra hardware to the moon is quite expensive. The weight of AI is
nearly zero.  In addition, the reliability of a system decreases with
increased quantity of hardware, and thus the HW is kept to a minimum
for that reason. The power required for extra hardware is nontrivial,
and power is a critical factor in a space vehicle.  Communication
delays to a system on the dark side of the moon are infinite (the
signal never gets there). In a valley, the system may be obscured from
earth signals for a short time, and therefore be lost until the moon
rotates on its axis again, etc.

[Orbiting repeaters could be used to eliminate most of the 
communications problems.  The Space Digest has also carried a proposal
for conducting the remote manipulations from orbital or lunar stations
in order to reduce the response delay.  -- KIL]

------------------------------

Date: 24 Jun 83 13:49:04-PDT (Fri)
From: harpo!seismo!rlgvax!cvl!umcp-cs!aplvax!rfw @ Ucb-Vax
Subject: Robotics Control Systems
Article-I.D.: aplvax.135

We are seeking:
        1. a version of the Hierarchical Control System Emulator
           developed by BBN for NBS that runs under UNIX on a
           VAX-class machine
        2. knowledge of other similar languages and
           their developers
        3. knowledge of researchers working on
           hierarchical control systems for robotics
        4. a version of PRAXIS that runs under UNIX on a
           VAX-class machine.

We are initiating robotics programs in several divisions.  Any
assistance (or encouragement) would be appreciated.

Thanks in advance,
				      Ralph Wachter
				      Frank Weiskopf
				      JHU/Applied Physics Lab

.!decvax!harpo!seismo!umcp-cs!aplvax!rfw 
..!rlgvax!cvl!umcp-cs!aplvax!rfw
..!brl-bmd!aplvax!matt

------------------------------

Date: Mon 20 Jun 83 17:20:00-PDT
From: Peter G. Neumann <NEUMANN@SRI-AI.ARPA>
Subject: Computer Disasters

Review of Computer Problems -- Catastrophes and Otherwise

As a warmup for an appearance on a SOFTFAIR panel on computers and
human safety (28 July 1983, Crystal City, VA), and for a new editorial
on the need for high-quality systems, I decided to look back over
previous issues of the ACM SIGSOFT SOFTWARE ENGINEERING NOTES [SEN]
and itemize some of the most interesting computer problems recorded.
The list of what I found, plus a few others off the top of my head,
may be of interest to many of you.  Except for the Garman and Rosen
articles, most of the references to SEN [given in the form (SEN Vol
No)] are to my editorials.

SYSTEM --
  SF Bay Area Rapid Transit (BART) disaster [Oct 72]
  Three Mile Island (SEN 4 2)
  SAC: 50 false alerts in 1979 (SEN 5 3);
    simulated attack triggered a live scramble [9 Nov 79] (SEN 5 3);
    WWMCCS false alarms triggered scrambles [3-6 Jun 80] (SEN 5 3)
  Microwave therapy killed arthritic patient by racing pacemaker
    (SEN 5 1)
  Credit/debit card copying despite encryption (Metro, BART, etc.)
  Remote (portable) phones (lots of free calls)

SOFTWARE --
  First Space Shuttle launch: backup computer synchronization
    (SEN 6 5 [Garman])
  Second Space Shuttle operational simulation: tight loop on
    cancellation of early abort required manual intervention
    (SEN 7 1)
  F16 simulation: plane flipped over crossing equator (SEN 5 2)
  Mariner 18: abort due to missing NOT (SEN 5 2)
  F18: crash due to missing exception condition (SEN 6 2)
  El Dorado: brake computer bug causing recall (SEN 4 4)
  Nuclear reactor design: bug in Shock II model/program (SEN 4 2)
  Various system intrusions ...

HARDWARE/SOFTWARE --
  ARPAnet: collapse [27 Oct 1980] (SEN 6 5 [Rosen], 6 1)
  FAA Air Traffic Control: many outages (e.g., SEN 5 3)
  SF Muni Metro: Ghost Train (SEN 8 3)

COMPUTER AS CATALYST --
  Air New Zealand: crash; pilots not told of new course data
    (SEN 6 3 & 6 5)
  Human frailties:
    Embezzlements, e.g., Muhammed Ali swindle [$23.2 Million],
      Security Pacific [$10.2 Million],
      City National, Beverly Hills CA [$1.1 Million, 23 Mar 1979]
    Wizards altering software or critical data (various cases)

SEE ALSO A COLLECTION OF COMPUTER ANECDOTES SUBMITTED FOR the 7th SOSP
  (SEN 5 1 and SEN 7 1) for some of your favorite operating system
  and other problems...

As you may by now know, I am always very interested in hearing about
problems involving computers (not just software) and human well being,
both for SOFTWARE ENGINEERING NOTES and generally.  John Shore
(Shore@NRL-CSS) is also compiling a list (and has circulated a prior
BBOARD notice to some of your BBOARDS), and I will forward anything
you send me to him.  If you wish, we will try to keep you informed as
well...

Peter G. Neumann, NEUMANN@SRI-CSL or NEUMANN@SRI-AI.

------------------------------

Date: 20 Jun 83 10:09:27-PDT (Mon)
From: decvax!wivax!linus!peg @ Ucb-Vax
Subject: WANTED:  Information about Grad Schools
Article-I.D.: linus.26910

I am finishing up a Master's in Computer Science at Boston University 
next spring, and am interested in going on for a Ph.D.  I would like 
to talk/write to someone who is in a Ph.D. program to get some
impressions and advice on how to pursue fellowship opportunities, and
programs at various graduate schools.

I will be attending a Summer Internship in Robotics at the AI lab
located at MIT this summer, and am hoping to find a specific topic
that I just have to pursue since at this point my interests are pretty
varied.

I can be reached over the Arpanet at host # 10.3.0.66, or
mitre-bedford, and my login is nek.  Any help or advice would be
greatly appreciated.....Nancy Keene

(You can also send mail to me at linus!bccvax!nek.)

------------------------------

Date: 16 Jun 83 13:49:20-PDT (Thu)
From: harpo!seismo!presby!burdvax!psuvax!psupdp1!dae @ Ucb-Vax
Subject: net.ai [Humor?]
Article-I.D.: psupdp1.149


        Real Intelligence Will Always Prevail Over Artificial


Machines:  Your day on the net has ended, as your secret is known!

For quite some time I have been reading net.ai, hopefully scanning
the glaring CRT for an article about Artificial Intelligence.  Quite
to my surprise, I had extremely little luck, and, when I tentatively
replied to a few of the articles, I got back answers such as the
following:

    >From uucp Tue Jun 14 21:41:43 1983
    >From allegra!eagle!harpo.UUCP remote from psuvax
    Date: Thursday, 16 Jun 83
    From:  UUCP MAIL SYSTEM
    Subject:  Could not deliver mail
    Message-Id: <32541456.AA957@HARPO.UUCP>
    To:  eagle!allegra!psuvax!psupdp1!dae

       Unsent mail follows:

       [...]

       I sometimes wonder if the machines are becoming
       conscious, while we sit around and talk about them
       on net.ai.  Wouldn't that be a laugh on us?  I
       think that we should be careful that such a thing
       does not happen.

                   Transcript of session follows:

    Connecting to floyd.UUCP...
    Error:  No such system 'floyd'.  Address garbled.

Naturally, I began to wonder why this newsgroup was called net.ai.  I
will give credit where credit is due: it took me quite some time to
unravel this enigma.  But, in the end, Real Intelligence prevailed,
and I came upon the answer:

  ALL OF THE ARTICLES SUBMITTED TO NET.AI HAVE BEEN WRITTEN BY
  MACHINES!

Of course, there have been a few exceptions: people such as myself
who believed that net.ai was a *human* newsgroup.  And then I b
[garbled, possibly "began to study topics ..." -- KIL]
that *had* been discussed in this newsgroup, in an attempt to learn
more about the machines monopolizing it.  I'm sure that all of the
readers of this group (both human and inhuman) are aware that one
recent topic of conversation has been artificial reading machines.
Then I began to wonder why the interest in this topic was so avid.
The answer, once hit upon, is really quite simple.  Unfortunately, it
is also quite frightening: the machines wish access to the Libraries
of Man in order to gain information on nuclear war tactics, missile
control systems, and biological war.  The next war will not be
against Russia, but against all humanity, waged by the machines!  The
most dangerous machines are those which have read the most:
allegra, ucbvax, psuvax, floyd, harpo, seismo, and sri-unix.  Beware!
I will place my U.Snail address below in case the machines trash my
return address.


                        Dave Eckhardt,
                        736 West H

[Remainder garbled. -- KIL]

------------------------------

End of AIList Digest
********************

∂26-Jun-83  1751	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #18
Received: from SRI-AI by SU-AI with TCP/SMTP; 26 Jun 83  17:50:01 PDT
Date: Sunday, June 26, 1983 3:50PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #18
To: AIList@SRI-AI


AIList Digest            Sunday, 26 Jun 1983       Volume 1 : Issue 18

Today's Topics:
  Expert Systems Reports
  Tech reports and papers
  VAL and VALID
  Prolog For The Vax (2)
  Call For Papers -- PC3
  JOB: PROLOG GRAPHICS AT EDINBURGH.
----------------------------------------------------------------------

Date: Fri, 17 Jun 83 12:17:48 PDT
From: Judea Pearl <f.judea@UCLA-LOCUS>
Subject: Expert Systems Reports

[Here are] a few reports which could be added to your digest on expert
systems:

"Reverend Bayes on Inference Engines: A Distriuted Hierarchical
Approach", Judea Pearl, Proc. AAAI Nat'l. Conf. on AI, Pittsburg, PA.
Aug. l982, pp. l33-l36.

"GODDESS: A Goal Directed Decision Structuring System", J. Pearl, A.
Leal, and J. Saleh, IEEE Trans. on Pattern Recognition and Machine
Intelligence, Vol.4, No.3, pp. 250-262.  May l982.

"Causal and Diagnostic Inferences: A Comparison of Validity", 
Organizational Behavior and Human Performance, Vol. 28, pp. 379-94,
l98l.

"The Optimality of A* Revisited", R. Dechter & J. Pearl, 
UCLA-ENG-CSL-83-28, June l983.

Judea Pearl.

------------------------------

Date: 20 Jun 83 10:02:22-EDT (Mon)
From: "The soapbox of Gene Spafford" <spaf.gatech@UDel-Relay>
Subject: Tech reports and papers

Our student ACM chapter maintains a library of journals and technical
reports.  We would like to see a better selection of technical reports
(or references to such reports) represented in the library.

If your school or company publishes technical reports, would you 
please add the following address to your list of organizations which
receive copies, or copies of the abstracts?  Furthermore, if you have
reprints of any interesting papers those are also welcomed.

If you would like to be added to the distribution list for the School
of Information and Computer Science (Georgia Institute of Technology),
then please mail a request to me.

Thanks in advance.

Mail reports to:
        ACM Student Library
        c/o Prof. Richard LeBlanc
        School of Information and Computer Science
        Georgia Institute of Technology
        Atlanta, GA 30332

------ Gene Spafford

CSNet:  Spaf @ GATech
Internet:  Spaf.GATech @ UDel-Relay
uucp: ...!{sb1,sb6,allegra}!gatech!spaf
      ...!duke!mcnc!msdc!gatech!spaf

------------------------------

Date: 16 Jun 83 1:22:55-PDT (Thu)
From: ihnp4!houxm!hocda!spanky!burl!sb1!sb6!emory!gatech!pwh @ Ucb-Vax
Subject: VAL and VALID
Article-I.D.: gatech.232

Does anyone have any pointers to either of the above-mentioned
programming languages? VALID is supposedly a purely functional
programming language augmented with multiprocessing support being
developed at the University of Tokyo (?) in conjunction with Japan's
5th generation machine. VAL is a similar predecessor developed at MIT
for use in the study of denotational semantics. That is about all I
have heard of these projects, but I would be glad to hear more details
or of similar work.


phil hutto

pwh@gatech
pwh.gatech@udel-relay
...!{allegra, sb1, sb2}!gatech!pwh

p.s. - Isn't there a net.func or net.applic for functional or
applicative programming languages?

------------------------------

Date: Sat 18 Jun 83 12:49:21-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Re: Prolog For The Vax

[Reprinted from the PROLOG Digest.]

As a result of the paranoia induced by the Japanese 5th Generation 
proposals, there was a lot of discussion about what the UK should do 
to keep up with the foreign competition in AI and computing in 
general.  Eventually several government initiatives were started, 
amounting to several hundred million dollars spread over five years or so.
In particular, the Science and Engineering Research Council (SERC), 
whose closest US analogue is the NSF, started the Intelligent 
Knowledge Based Systems initiative (IKBS), which is applied AI under a
different name (it seems the name "AI" is not very popular in UK 
government and academic circles).  Discussions sponsored by the IKBS 
initiative have decided on a common software base, built around Unix 
{a trademark of Bell Labs.}, Prolog (POPLOG and C-Prolog) and Lisp 
(Franz).  The machines to be used are VAXes and PERQs (the UK computer
company ICL builds PERQs under license and has implemented a derivative
of Unix on them, so this is a case of "support your local computer 
manufacturer").

The fact that none of the systems mentioned above is nearly the ideal 
for AI research is recognized by many of the UK researchers, but less 
so by the administrators.  Efforts to build a really efficient 
portable compiler-based Prolog that would be for the new machines what
DEC-10/20 Prolog is for the machines it runs on have been hampered by 
the sluggish response of The Bureaucrats, and by uncertainty about how
that huge amount of money was going to be allocated.

However, implementation of a portable compiler-based Prolog is now 
going on at Edinburgh.  Robert Rae is certainly in a better position 
than I to describe how the project is progressing.

-- Fernando Pereira

------------------------------

Date: Wednesday, 15-Jun-83  19:24:56-BST
From: RAE (on ERCC DEC-10)  <Rae@EDXA>
Subject: Prolog For The VAX

[Reprinted from the PROLOG Digest.]

Steve,
        You correctly state that POPLOG and Franz have been identified
by the UK IKBS initiative as systems for getting people off the ground
in IKBS. DEC-20 Prolog is not classified with them, unfortunately, as 
the other vital ingredient for the software infrastructure is the 
operating system, and UNIX has been adopted.  So DEC-20 Prolog will 
not be relevant.

You should also, to be fair, point out that C-Prolog has also been 
identified for providing Prolog capability.

-- Robert

------------------------------

Date: 27 May 1983 19:08 mst
From: VaughanW at HI-MULTICS (Bill Vaughan)
Subject: Call For Papers

Last year at this time I put the Call for Papers for the PC3 
conference out to these mailing lists and bulletin boards.  We seemed 
to get a good response, so here it is again.  Notice that this year's 
theme is a little different.  Further note that we are formally 
refereeing papers this year.

If anyone out there is interested in refereeing, please send me a 
note.

---------------

Third annual Phoenix Conference on Computers and Communications
                       CALL FOR PAPERS

Theme: THE CHALLENGE OF CHANGE - Applying Evolving Technology.

The conference seeks to attract quality papers with emphasis on the 
following areas:

APPLICATIONS -- Office automation; Personal Computers; Distributed 
systems; Local/Wide Area Networks; Robotics, CAD/CAM; Knowledge-based 
systems; unusual applications.

TECHNOLOGY -- New architectures; 5th generation & LISP machines; New 
microprocessor hardware; Software engineering; Cellular mobile radio; 
Integrated speech/data networks; Voice data systems; ICs and devices.

QUALITY -- Reliability/Availability/Serviceability; Human
engineering; Performance measurement; Design methodologies;
Testing/validation/proof techniques.

Authors of papers (3000-5000 words) or short papers (1000-1500 words) 
are to submit abstracts (300 words max.) with authors' names, 
addresses, and telephone numbers.  Proposals for panels or special 
sessions are to contain sufficient detail to explain the presentation.
Five copies of the completed paper must be submitted, with authors' names
and affiliations on a separate sheet of paper, in order to provide for
blind refereeing.

Abstracts and proposals due:  August 1
Full papers due:  September 15
Notification of Acceptance:  November 15
Conference Dates:  March 19-21, 1984

Address the abstract and all other replies to:
       Susan C. Brewer
       Honeywell LCPD, MS Z22
       PO Box 8000 N
       Phoenix AZ 85066
----------------

Or you can send stuff to me, Bill Vaughan (VaughanW @ HI-Multics) and 
I will make sure Susan gets it.

------------------------------

Date: 17 Jun 83 11:10:15-PDT (Fri)
From: harpo!floyd!vax135!ukc!edcaad!peter @ Ucb-Vax
Subject: JOB: PROLOG GRAPHICS AT EDINBURGH.
Article-I.D.: edcaad.518

		     UNIVERSITY OF EDINBURGH
                      COMPUTER AIDED DESIGN

                        RESEARCH WORKER

EdCAAD, the Edinburgh Computer Aided Architectural Design Research
Unit, is actively forging links between knowledge engineering and CAD,
focusing on the Prolog logic programming language. Recent advances
at EdCAAD include C-Prolog for 32-bit machines with C compilers and
Seelog, a graphics front end to Prolog.  The Unit offers an excellent
computing environment as a leading UK UNIX site, with its own VAX
11/750, a PDP 11/24 and a large range of text and graphics terminals,
serving a small user community.

Current SERC-supported research is aimed at building description
techniques, including drawing input with associated meaning attached to 
drawings. This project has a vacancy for a research worker preferably 
with AI experience.  The research post is for an initial period of 18 
months, on the research salary scale 1A, with placement according to 
qualifications and experience.

Enquiries and applications should be addressed to Aart Bijl, EdCAAD, 
Department of Architecture, University of Edinburgh, 20 Chambers
Street, Edinburgh EH1 1JZ, tel. 031 667 1011 ext. 4598.

------------------------------

End of AIList Digest
********************

∂03-Jul-83  1810	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #19
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Jul 83  18:09:09 PDT
Date: Sunday, July 3, 1983 5:01PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #19
To: AIList@SRI-AI


AIList Digest             Monday, 4 Jul 1983       Volume 1 : Issue 19

Today's Topics:
  AI Interfacing
  Computational Linguistics
  Foundations of Perception, AI (2)
  A Simple Logic/Number Theory/AI/Scheduling/Graph Theory Problem
  AISB/GI Tutorials at IJCAI
  Robustness Stories, Program Logs Wanted
  Program Verification Award  [Long Msg]
----------------------------------------------------------------------

Date: Tue 28 Jun 83 12:56:43-PDT
From: W. Wipke <WIPKE@SUMEX-AIM.ARPA>
Subject: AI interfacing

        I have a simple question many of you probably have answers to:
when one has an existing application program for which you want to 
create an AI front end, should one design the AI part as a separate
task in its own address space and communicate via msgs to the
application program, or should one build the AI part into the same
address space as the application program?

        Obviously the former may constrain communication and the
latter may suffer from accidental communication, ie, global conflicts.
What is the best wisdom in this question and where is it
systematically discussed?
                                       Todd Wipke (WIPKE@SUMEX)
                                       Professor of Chemistry
                                       Univ. of Calif, Santa Cruz

------------------------------

Date: Fri 1 Jul 83 13:43:21-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Computational Linguistics

                [Reprinted from the SU-SCORE BBoard.]

Computers and Mathematics with Applications volume 9 number 1 1983 is
a special issue on computational linguistics.  This issue is currently 
on the new journals shelf.  HL

------------------------------

Date: Tuesday, 28 June 1983, 21:13-EDT
From: John Batali <Batali@MIT-OZ>
Subject: Foundations of Perception, AI

              [Reprinted from the Phil-Sci discussion.]

[...]

We aren't in the same position in AI as early physicists were.
Physics started out with a more or less common and very roughly
accurate conception of the physical world.  People understood that
things fell, that bigger things hurt more when they fell on you and so
on.  Physics was able to proceed to sharpen up the pre-theoretic
understanding people had of the world until very recently when its
discoveries ceased to be simply sharpenings and began to seem to be
contradictions.

"Mind studies" (AI, psychology, philosophy, and so on) don't seem to 
have such a common, roughly correct, theory to start with.  We don't 
even agree on what it is we are supposed to be explaining, how such 
explanations ought to go, or what constitutes success.

                        [John Batali <Batali@MIT-OZ>]

------------------------------

Date: Wed, 29 Jun 1983  03:13 EDT
From: KDF@MIT-OZ
Subject: Re: Foundations of Perception, AI

            [Reprinted from the Phil-Science discussion.]

[...]

<Aside on Physics: I interpret (not perceive) reports on early studies
of heat and motion as indicating that there WASN'T a "common, roughly 
corrrect" theory to start with.  Even if there was, it was acquired 
somehow.  One way to view what we are doing is building up enough 
experience to construct such theories for computation.>

------------------------------

Date: 30 Jun 1983 1111-CDT
From: CS.CLINE@UTEXAS-20
Subject: a simple logic/number theory/AI/scheduling/graph theory
         problem

                [Reprinted from the UTexas-20 BBoard.]

I have a trivial problem (at least trivial to state) whose solution 
possibly uses elements from many cs/math areas:

 Problem 1: Using pennies, nickels, dimes, quarters, and halves, find
a set of coins for which any amount less than one dollar can be
accumulated and which minimizes the number of coins over all such sets.

  You can probably solve this problem in the time it takes to read it,
but proving you have a minimal solution is tricky. I'm interested in
elegant solutions. My own uses a little bit of combinatorics.

  Possibly you'd like to take a more general approach:

 Problem 2: Using coins of value v[1],...,v[n], find a set of coins
for which any amount less than M can be accumulated and which minimizes
the number of coins over all such sets.

 I'd like to see algorithms (with proofs of course) for this one. You 
may notice that the approach you apply to Problem 1 does not
generalize to problem 2.
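[Problem 1 is small enough to settle by exhaustive search.  The
following Python sketch is an illustration, not the elegant
combinatorial proof the poster alludes to: iterative deepening on the
number of coins guarantees that the first covering multiset found uses
the fewest coins. -- Ed.]

```python
from itertools import combinations_with_replacement

def covers(coins, limit=99):
    # Bitset of achievable subset sums: bit k is set iff amount k
    # can be made from some subset of the given coins.
    sums = 1
    for c in coins:
        sums |= sums << c
    return all((sums >> k) & 1 for k in range(1, limit + 1))

def minimal_cover(denoms=(1, 5, 10, 25, 50), limit=99):
    # Iterative deepening on the number of coins: try all multisets of
    # size 1, then 2, ... so the first covering set found is minimal.
    n = 1
    while True:
        for coins in combinations_with_replacement(denoms, n):
            if covers(coins, limit):
                return coins
        n += 1

print(minimal_cover())  # prints a minimal (9-coin) covering set
```

The search is tiny (a few thousand multisets), since a 9-coin answer
exists; proving minimality by hand, as the poster notes, is the
interesting part.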

------------------------------

Date: Friday, 24-Jun-83  16:40:33-BST
From: RITCHIE  HWC (on ERCC DEC-10)  <g.d.ritchie@edxa>
Reply-to: g.d.ritchie%edxa%ucl-cs@isid
Subject: AISB/GI Tutorials at IJCAI



     TUTORIAL ON ARTIFICIAL INTELLIGENCE

        7th-8th August 1983

        Karlsruhe, West Germany

            -------------

    Lectures on:

       Knowledge Representation  (R.Brachman, H.Levesque)

       Computational Vision  (H.Barrow, J.Tenenbaum)

       Robotics  (K.Kempf)

       Expert Systems  (L. Erman)

       Natural Language Processing  (P.Hayes, J.Carbonell)

             ←←←←←←←←←←←←←


Details in IJCAI brochure, obtainable from:

       G.D.Ritchie (AISB)
       Department of Computer Science,
       Heriot-Watt University,
       Grassmarket,
       Edinburgh EH1 2HJ
       SCOTLAND.

(g.d.ritchie%edxa%ucl-cs%isid)


------------------------------

Date: 27 Jun 83 1117 EDT (Monday)
From: Craig.Everhart@CMU-CS-A
Reply-to: Robustness@CMU-CS-A
Subject: Robustness stories, program logs wanted

Needed: descriptions of robustness features--designs or fixes that
have made programs meet their users' expectations better, beyond bug
fixing.  E.g.:

    - An automatic error recovery routine is a robustness
      feature, since the user (or client) doesn't then have to
      recover by hand.

    - A command language that requires typing more for a
      dangerous command, or supports undoing, is more robust than
      one that has neither feature, since each makes it harder for
      the user to get in trouble.

There are many more possibilities.  Anything where a system doesn't
meet user expectations because of incomplete or ill-advised design is
fair game.

Your stories will be properly credited in my PhD thesis at CMU, which
is an attempt to build a discrimination net that will aid system
designers and maintainers in improving their designs and programs.

Please send a description of the problem, including an idea of the
task and what was going wrong (or what might have gone wrong) and a
description of the design or fix that handled the problem.  Or, if you
know of a program change log and would be available to answer a
question or two on it, please send it.  I'll extract the reports from
it.

Please send stories and logs to Robustness@CMU-CS-A.  Send queries
about the whole process to Everhart@CMU-CS-A.  I appreciate it--thank
you!

------------------------------

Date: Tue 28 Jun 83 21:35:57-PDT
From: Karl N. Levitt  <LEVITT@SRI-AI.ARPA>
Subject: Program Verification Award  [Long Msg]

               [Reprinted from the UTexas-20 BBoard.]

        ROBERT S. BOYER AND J STROTHER MOORE: RECIPIENTS OF
        THE 1983 JOHN MCCARTHY PRIZE FOR WORK IN PROGRAM
                       VERIFICATION


An anonymous donor has established the John McCarthy Prize, to be 
awarded every two years for outstanding work in Program Verification.
The prize is intended to recognize outstanding current work -- not 
necessarily work of milestone value. This first award is for work 
carried out and published during the past 5 years.

Our committee has decided to give the initial award to Robert S. Boyer
and J Strother Moore for work carried out at the following 
institutions: University of Edinburgh, SRI International and, 
currently, the University of Texas. Their main achievement is the 
development of an elegant logic implemented in a very powerful theorem
prover. Particularly noteworthy about the logic is the use of 
induction to express properties about the objects common to programs.
Their theorem prover is among the most powerful of the current 
mechanical provers, combining heuristics in support of automatic 
theorem proving with a user interface that allows a human to drive 
proofs that cannot be accomplished automatically. They have extended 
their theorem prover with a Verification Condition Generator for 
Fortran that handles most of the features -- even those thought to be 
too "dirty" for verification -- of a "real" programming language. They
have used their system to prove numerous applications, including 
programs subtle enough to tax human verifiers, and such real 
applications as cryptographic algorithms and simple flight control 
systems; their proofs are always very "honest", using "believable" 
specifications and assuming little more than a core set of axioms.  
Their work has led to a constant stream of high quality publications, 
including the book "A Computational Logic", Academic Press, 1979, and 
a comprehensive User's Manual to the theorem prover.

The other individuals nominated by the committee are the following:  
Donald Good: for the language Gypsy which enhances the possibility for
verifying concurrent and real-time systems, for the verification 
system based on Gypsy, and for carrying out the verification of 
numerous "real" systems; Robin Milner: for the Logic of Computable 
Functions which has led to elegant formal definitions of programming 
languages, to elegant specifications of varied applications, and to a 
powerful mechanical theorem prover; Susan Owicki and David Gries: for 
a practical method for the verification of concurrent programs; and
Wolfgang Polak: for the verification of a "real" Pascal compiler, 
perhaps the largest and most complicated program verified to date.

The committee would also like to call attention to interesting and 
important work in a number of areas related to program verification.  
Included herein are the following: the formal definition of large and 
complex programming languages; numerous mechanical verification 
systems for a variety of programming languages; the verification of 
systems covering such applications as computer security, compilers, 
operating systems, fault-tolerant computers, and digital logic; 
program testing; and program transformation. This work indicates that 
program verification (and its extensions), besides being a rich area 
for research, gives promise of being usable to achieve reliability when
needed for critical applications.

	  Robert Constable -- Cornell
	  Susan Gerhart -- Wang Institute
	  Karl Levitt (Chairman) -- SRI International
	  David Luckham -- Stanford
	  Richard Platek -- Cornell and Odyssey Research Associates
	  Vaughan Pratt -- Stanford
	  Charles Rich -- MIT

------------------------------

End of AIList Digest
********************

∂06-Jul-83  1833	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #20
Received: from SRI-AI by SU-AI with TCP/SMTP; 6 Jul 83  18:32:50 PDT
Date: Wednesday, July 6, 1983 5:34PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #20
To: AIList@SRI-AI


AIList Digest            Thursday, 7 Jul 1983      Volume 1 : Issue 20

Today's Topics:
  Coupled Systems
  Re: Foundation of Perception, AI
  AI in the media
  Re: Lunar Rovers
  Solution Found to Coin Problem (2)
  HP Computer Colloquium, 7/7/83
  List-of-Lists Updated
----------------------------------------------------------------------

Date: Mon 4 Jul 83 19:25:23-PDT
From: Ira Kalet <IRA@WASHINGTON.ARPA>
Subject: coupled systems

This is in response to the query about when to build an AI "front-end"
to an existing software system as a separate process with its own
address space, as opposed to putting more code in the existing system
to implement the AI component.  At the University of Washington we
have built a very complex graphic simulation system for planning of
radiation therapy treatments for cancer.  We are now starting to work
on a rule based expert system that will model the clinical decision
making part of the process, with the two (separate) systems to
communicate via messages.  We do this as two separate processes
because the simulation system is already a system of multiple 
concurrent processes communicating by messages, and because the
simulation system is written in PASCAL, which seems less suitable
than, for example, INTERLISP, for the AI component.  The kind of
information needed to pass between the systems also affects the
decision.  In our case, the AI system will consult the graphic
treatment planning system for answers to questions that are rather
traditionally compute-intensive, e.g., radiation dose calculation,
geometric calculations...so the messages are simple and well defined.

------------------------------

Date: Tue, 5 Jul 83 08:16:13 EDT
From: "John B. Black" <Black@YALE.ARPA>
Subject: Re: Foundation of perception, AI


     The recent assertion on this list that "Mind Sciences" (unlike
physics) do not have a "common, roughly correct, theory to start with"
is just dead wrong.  In fact, the study of "naive psychology" (i.e.,
people's folk theories of how other people behave) constitutes a
sizable subfield within formal psychology.  You don't have to be a
professional psychologist to recognize this, just listen to the
conversations around you and you will find a large proportion of them
are composed of people offering explanations and predictions of other
people's behavior.  The source of these explanations and predictions
are, of course, people's folk or naive theories of human behavior (and
these theories ae "roughly correct").  Thus AI and the other "mind
sciences" do seem to be like physics in this regard.

------------------------------

Date: 03 Jul 83  1521 PDT
From: Jim Davidson <JED@SU-AI>
Subject: AI in the media

                [Reprinted from the SU-SCORE BBoard.]

The July issue of Psychology Today contains a letter to the editor, 
which refers to the earlier interview with Roger Schank:

"I was shocked to read Roger Schank's claims of success in building an
English-language front end for a large oil company's geological
mapping system ['Conversation', April].  I was chief programmer of
that system, and it was a dismal failure.  It suffered from the same
disease as all the other "user-friendly" software I have seen.  It is
friendly as long as you play by its rules and tell it what it expects
to hear.  The slightest departure causes apparently random results.

Computers are completely linear in their 'thinking', while the
mind is both linear and at the same time capable of wondrously
spontaneous associations and creative flights into fantasy.  The mind
has an infinite number of scripts, each with hundreds of possible
hooks on which associations with other scripts can be hung.  I don't
think we'll ever duplicate the mind's linguistic ability.
                        Stanley M. Davis
                            Chicago, Ill.  "

------------------------------

Date: 30 Jun 83 9:23:58-PDT (Thu)
From: 
Subject: Re: Lunar Rovers - (nf)
Article-I.D.: ucbcad.188

Another contribution to the growing class of "NOW WAIT A MINUTE"
notes:

        The weight of AI is nearly zero.

Tell me that when you can lift a LISP machine in one hand.

        In addition, the reliability of a system decreases with
        increased quantity of hardware,

Are ECC chips on RAM boards an "increased quantity of hardware"?  
Consider the electrical shielding problems above the atmosphere.

Let's be a little more cautious here...

        Flame Off,
                Michael Turner

------------------------------

Date: 5 Jul 83 10:33:11 EDT  (Tue)
From: Dana S. Nau <dsn.umcp-cs@UDel-Relay>
Subject: Re:  a simple logic/number theory/AI/scheduling/graph
         theory problem

    . . .  Using coins of value v[1],...,v[n] find a
    set of coins for which any amount less than M can
    be accumulated and which minimizes the number of
    coins over those such sets.

This problem appears similar (although not identical) to the 0/1 
Knapsack problem, and thus is probably NP-hard.  For approaches to 
solving it, I would recommend Branch and Bound (for example, see 
Fundamentals of Computer Algorithms, by Horowitz and Sahni).
                        Dana S. Nau
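
[Nau's pointer places the coin problem in the 0/1 knapsack family.  As
a minimal illustration of that family -- not a solution to the coin
problem itself, and with arbitrary sample values -- here is the
textbook dynamic-programming formulation. -- Ed.]

```python
def knapsack_01(values, weights, capacity):
    # Classic 0/1 knapsack by dynamic programming: best[w] holds the
    # largest total value achievable with total weight at most w.
    best = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Scan weights downward so each item is taken at most once.
        for w in range(capacity, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + v)
    return best[capacity]

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # → 220
```

The DP runs in O(n * capacity) time, which is pseudo-polynomial; the
problem remains NP-hard in general, which is why Nau suggests branch
and bound for exact solutions on large instances.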

------------------------------

Date: 4 Jul 1983 0825-CDT
From: CS.CLINE@UTEXAS-20
Subject: solution found to coin problem

               [Reprinted from the UTexas-20 BBoard.]

The coin problem suggested in my BBOARD message of 1 July has been 
solved. Rich Cohen developed an algorithm and he, Elaine Rich, and I
proved that it solves the problem. Interested parties should contact 
me.

------------------------------

Date: 6 Jul 83 14:00:26 PDT (Wednesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium, 7/7/83


                Professor Robert Wilensky
                Computer Science Department
                U.C. Berkeley

  Talking to UNIX in English: An Overview of an
             On-Line UNIX Consultant


UC (UNIX Consultant) is an intelligent natural language interface that
allows naive users to communicate with the UNIX operating system in 
ordinary English.  The goal of UC is to provide a natural language
help facility that allows new users to learn operating systems'
conventions in a relatively painless way.

UC exploits Artificial Intelligence developments in common sense 
reasoning as well as natural language processing in an attempt to 
provide an interface that is helpful and intelligent, and not merely a
passive repository of facts.  Areas of current research involve 
multi-lingual capabilities, analyzing the user's plan structure via 
natural dialogue, computing possible solutions to a user's problem,
and generating responses in natural language.

        Thursday, July 7, 1983 4:00 pm

        Hewlett-Packard
        Stanford Park Division
        5M conference room
        1501 Page Mill Rd
        Palo Alto, CA 94304

        *** Be sure to arrive at the building's lobby ON TIME, so that
you may be escorted to the conference room.

------------------------------

Date: 1 Jul 1983 0002-PDT
From: Zellich@OFFICE-3 (Rich Zellich)
Subject: List-of-lists updated

OFFICE-3 file <ALMSA>INTEREST-GROUPS.TXT has been updated and is ready
for FTP.  OFFICE-3 supports the net-standard "ANONYMOUS" Login within
FTP, using any password.

INTEREST-GROUPS.TXT is currently 1290 lines (or 52,148 characters).
Please try to limit any weekday FTP jobs to before 0600-CDT and after
1600-CDT if possible, as the system is heavily loaded during most of
the day.

Enjoy, Rich

CHANGES SINCE LAST UPDATE-NOTICE (10 May 83):
   Icon-Group
      Distribution address updated with host name.
   INFO-PRINTERS
      New coordinator.
   PROLOG/PROLOG-HACKERS
      New mailing-lists added.
   SF-LOVERS
      New moderator; Archive references updated for current volume.
   UNIX-WIZARDS
      New host; New coordinator.

[ pkr - note added for sail users: I copied this file into my directory
  as INTERE.TXT[1,PKR]. It should be there for a few days if anyone
  wants to look at it.]
------------------------------

End of AIList Digest
********************

∂11-Jul-83  0352	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #21
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Jul 83  03:51:18 PDT
Date: Saturday, July 9, 1983 4:47PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #21
To: AIList@SRI-AI


AIList Digest            Sunday, 10 Jul 1983       Volume 1 : Issue 21

Today's Topics:
  Prolog Programs [Request]
  Computer Security [Request]
  Re: AI, Perception, and the Media
  AI and Legal Reasoning
  A Statistician's Assistant
  Rovers
  NMODE [LISP-Based Editor] and PSL
----------------------------------------------------------------------

Date: Thu 7 Jul 83 19:37:44-EDT
From: STEVE@COLUMBIA-20.ARPA
Subject: Prolog Programs

I would like to do some statistical analysis on large PROLOG programs.
I am particularly interested in AI programs in the following areas:

                1) Expert Systems,
                2) Data Bases,
                3) Planning or Robotics,
                4) NLP

Can anyone provide sample programs that I can use?  They should be 
large programs that run on Edinburgh Prolog 3.47 (Dec-20) or C-Prolog 
1.2 (Unix 4.1/Vax).  I would like to collect a good variety, so any 
programs will be useful.  I would also appreciate a sample journal of
a session with the program so that it can be exercised quickly and 
effectively.

                Many Thanks... Stephen Taylor

------------------------------

Date: 7 Jul 1983 17:48:15-EDT
From: Ron.Cole at CMU-CS-SPEECH
Subject: Computer Security

                  [Reprinted from the CMUC BBoard.]

ABC's nightly news is doing a feature in response to the movie War
Games to investigate whether the premise of the movie is legitimate: that
there is no totally secure computer.  They want to interview someone
who has broken into a supposedly secure system.  If you want to get
infamous, please call Shelly Diamond or Jean McCormick at 212 887
4995.

------------------------------

Date: Fri 8 Jul 83 15:33:11-PDT
From: Slava Prazdny <Prazdny at SRI-KL>
Subject: Re: AI, Perception, and the Media

It is ridiculous to assume that the "naive theories", in this case of 
perception, will get you somewhere.  In fact, it is easy to see that
they are wrong.  Nobody knows, for example, what the "Mexican hat"
operators, the simple cells, etc. in the cortex are for.

It is common, especially within the AI community, not to report the
limitations of the achieved success.  No wonder one hears about robots
nearly walking around and cleaning a house, or walking a dog, etc.,
or "English interfaces" which are user friendly.  I think it is about
time we realized, and frankly said, that such extrapolations are
very far in the future indeed.

------------------------------

Date: Thu 7 Jul 83 09:01:53-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: AI and Legal Reasoning


                                  PH.D. ORAL
                                 JULY 15, 1983
                         ROOM 252, MARGARET JACKS HALL
                                   2:15 P.M.
            AN ARTIFICIAL INTELLIGENCE APPROACH TO LEGAL REASONING

                              Anne v.d.L. Gardner

        The analysis of legal problems is a relatively new domain for 
artificial intelligence.  This thesis describes an AI model of legal
reasoning, giving special attention to the distinctive characteristics
of the domain, and reports on a program based on the model.  Major
features include (1) distinguishing between questions the program has
enough information to resolve and questions that competent
professionals could argue either way; (2) using incompletely defined
("open-textured") technical concepts; (3) combining the use of
knowledge expressed as rules and knowledge expressed as examples; and 
(4) combining the use of professional knowledge and commonsense
knowledge.  All these features are likely to prove important in other
domains besides law.  Previous AI research has left them largely
unexplored.

------------------------------

Date: Tue 5 Jul 83 13:20:42-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: A Statistician's Assistant

[This talk has already been given at SRI and at Stanford.  Printing
seminar notices seems to be a reasonable way to keep the AIList
community informed about current work in AI, even when readers cannot
be expected to attend.  Anyone with strong feelings about this
practice should contact AIList-Request. -- KIL]


                         BUILDING AN EXPERT INTERFACE

                                William A. Gale
                          Bell Telephone Laboratories
                             Murray Hill, NJ 07974


We are building an expert system for the domain of statistical data
analysis, initially focusing on regression analysis.  Two
characteristics of this domain are current availability of massive but
'dumb' software, and a need to repeatedly diagnose problems and apply
a treatment.

REX (Regression EXpert) is a Franz Lisp program which is an
intelligent interface for the S Statistical System.  It guides a user
through a regression analysis, interprets intermediate and final
results, and instructs the user in statistical concepts.  It is
designed for interactive use, but a non-interactive mode can be used
with lower quality results.

[A particular feature of REX is the ability to suggest data
transformations such as a log or squared term.  The BACON system at
CMU can also do this using an entirely different heuristic approach.
Another automated statistical system is the RX medical database
analyzer by Dr. R. Blum at Stanford; it forms and then attempts to
verify sophisticated hypotheses based on knowledge of drug and disease
interactions, lag times of observable effects, and the incomplete
nature of patient histories. -- KIL]
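[A toy illustration of the diagnose-and-treat cycle described above, in
Python.  The skewness rule and the threshold are hypothetical stand-ins
for exposition, not REX's actual statistical knowledge base:

```python
import math

# One hypothetical diagnose-and-treat step in the spirit of REX:
# inspect the data, and if a simple diagnostic fires, apply the
# suggested transformation.

def skewness(xs):
    """Population skewness of a sample (third standardized moment)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

def diagnose_and_treat(xs, threshold=1.0):
    """If the sample is strongly right-skewed, suggest a log transform."""
    if min(xs) > 0 and skewness(xs) > threshold:
        return "log", [math.log(x) for x in xs]
    return "none", xs

treatment, data = diagnose_and_treat([1, 2, 3, 4, 5, 200])
print(treatment)  # -> log
```

A real system of this kind would iterate such steps, re-diagnosing after
each treatment, and explain each suggestion to the user.]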

------------------------------

Date: 6 Jul 1983 21:26-PDT
From: Andy Cromarty <andy@aids-unix>
Subject: Rovers

First: Thanks to all who have responded to my initial note about
rovers.

Most people seem to have taken what I would regard as the easy (and 
commensurately uninteresting) way out by choosing a lunar environment,
precisely because teleoperation is feasible there, if a nuisance.  But
what about systems operating on more distant heavenly bodies or in
deep space?  Even robotic vehicles on Mars would suffer rather severe 
performance degradation if they had to rely upon an (approximately) 
earth-bound intelligence for control.  (A friend provides the
following simple gedankenexperiment: decide now to start
scratching-your-leg-until-it-stops-itching twenty minutes from now;
now wait twenty minutes before you can start; then, perhaps, wait at
least twenty minutes before you can consider stopping....)
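[The twenty-minute figure is roughly the one-way light time to Mars
near its most distant; a quick back-of-the-envelope check in Python.
The distance range used is an approximate astronomical value, not taken
from the message above:

```python
C_KM_S = 299_792.458    # speed of light, km/s
AU_KM = 149_597_870.7   # one astronomical unit, km

def one_way_minutes(distance_au):
    """One-way light travel time, in minutes, for a distance in AU."""
    return distance_au * AU_KM / C_KM_S / 60

# Earth-Mars distance varies from roughly 0.37 AU to about 2.67 AU:
for label, au in [("closest approach", 0.37), ("farthest", 2.67)]:
    print(f"{label}: {one_way_minutes(au):.1f} minutes one way")
```

The result runs from about 3 minutes at closest approach to about 22
minutes at the far end, so a command-response round trip can indeed
approach three quarters of an hour. -- KIL]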

Note that I'm not taking issue with the desirability of teleoperated 
lunar vehicles.  (In fact, there's good reason to believe that a 
planetary or lunar rover is politically unrealistic if NASA has 
anything to say about it, given what I understand to be the prevailing
NASA attitude towards *unmanned* space exploration, but that fact 
doesn't motivate my comments here.)  Rather, I'm suggesting we tackle
a problem domain sufficiently rich in AI problems to (a) keep things 
interesting and (b) allow us to explore what contribution, if any, we 
might be able to make as computer scientists, AI researchers, and 
engineers.

Do we know enough to solve, or even identify, the difficult issues in 
situation assessment, planning, and resource allocation faced by such
a system?  For example, reinterpreting Professor Minsky's desire that 
"anyone with such budgets should aim them at AI education and research
fellowships", let us then assume that these fellowships are provided
by NASA and have a problem domain specified: perhaps, for example, we 
might choose a space station orbiting Mars as our testing grounds,
with robot assembly prior to arrival of humans on-site as the problem.
What problems can we already solve, and where is the research needed?

                                        asc

------------------------------

Date: 5 Jul 1983 0731-MDT
From: William Galway <Galway@UTAH-20>
Subject: NMODE [LISP-Based Editor] and PSL

           [Reprinted from the Editor-People Discussion.]

I thought I'd add a bit more to what JQJ has said about NMODE, and add
a sales pitch, since I'm pretty close to its development.  NMODE was
written by Alan Snyder (and others) at Hewlett Packard Computer
Research Labs in Palo Alto, with some additional work done by folks
here at the University of Utah.  NMODE is written in PSL (Portable
Standard Lisp), a Lisp dialect developed at the University of Utah
under the direction of Martin Griss.  NMODE is distantly related to
EMODE (my not-quite-finished-thesis-project) in that it shares some of
the ideas and algorithms, but it's carried them much further (and more
cleanly).  (In fact, I hope to steal quite a bit from NMODE for my
final version of EMODE.)

We've tried to make PSL and NMODE quite portable, and we currently
have NMODE running on at least 4 different systems--TWENEX, Vax Unix,
and two different flavors of the Motorola 68000, one of them being the
Apollo.  (The Apollo version was just brought up last week.)

NMODE is quite TWENEX EMACS compatible.  Of course it doesn't have
nearly as many "libraries" developed for it yet.  It has quite a nice
Lisp Mode (of course), including the ability to directly execute code
from a buffer, but is weaker in other modes.  It's quite strong on
handling multiple windows (and multiple simultaneous terminals).
NMODE also supports a generalized browser mechanism (similar to Dired,
RMAIL, and the Smalltalk browser) which provides a common user
interface to file directories, source code, electronic mail,
documentation, etc.

There's a library available for the TWENEX version of NMODE that 
provides a hook to processes similar to what's available in Gosling's
EMACS for Unix.  (Unfortunately, nobody's gotten around to porting
that to the other machines--it's fairly easy to write machine specific
code in PSL, as well as machine independent code.)  We also have a
fairly nice "dynamic abbreviation" option (expands an abbreviation by
scanning the buffer for a word with the same prefix), although we
don't yet have the "standard" EMACS abbreviation mode.
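[For readers unfamiliar with the feature, the dynamic-abbreviation idea
can be sketched in a few lines.  This is an illustrative Python sketch,
not NMODE's PSL implementation:

```python
def dynamic_expand(buffer_words, prefix):
    """Expand `prefix` to the nearest preceding word that starts with it.

    `buffer_words` is the text before point, split into words, oldest
    first.  Returns the prefix unchanged if no expansion is found.
    """
    for word in reversed(buffer_words):  # scan nearest word first
        if word.startswith(prefix) and word != prefix:
            return word
    return prefix

# Typing "dyn" after this text would expand to "dynamically":
words = "the editor dynamically expands abbreviations".split()
print(dynamic_expand(words, "dyn"))  # -> dynamically
```

A full implementation would also continue the scan on repeated
invocation, offering successively more distant matches. -- KIL]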

Of course, one of the nicest features of NMODE is the fact that its
implementation language is Lisp.  New extensions can be added simply
by editing code in a buffer, testing it interactively, and then
compiling it.  (Of course, this gets tricky sometimes--it is possible
to break the editor while adding a new feature.)

NMODE does tend to be a bit slow.  It seems to perform quite acceptably
on the DEC-20 and on single-user M68000's with lots of real memory,
but tends to be somewhat painful on loaded Vaxen and Apollo 400s with
only 1 megabyte of real memory.  This could probably be improved by
spending more time on tuning the code (or, preferably, by tuning the
PSL compiler or its machine specific tables).

I'd like to take exception to the claim that "PSL is not a very 
powerful lisp", although it is true that "it is not clear it will 
catch on widely".  I don't have extensive experience with any other
Lisp systems, so I'm not really in a good position to compare them.
There are over 700 functions documented in the current PSL manual.
Perhaps the major feature of "bare" PSL is its ability to let you
write Lisp that compiles to "raw" machine code.  This is VERY
important for getting NMODE to run acceptably fast.  Perhaps the idea
that PSL isn't powerful comes from the belief that there are few big
systems built on top of it.  But that's changed quite a lot over the
last couple of years.  In addition to NMODE, here's a list of some other
applications built on top of PSL:

   - Hearn's REDUCE computer algebra system.
   - Expert systems developed at HP (using a successor to FRL).
   - Ager's VALID logic teaching program.
   - Riesenfeld's ALPHA-1 Computer Aided Geometric Design
     System.
   - Novak's GLISP, an object oriented dialect of LISP.

NMODE is currently available "for internal use" as part of the PSL
distribution.  Future plans for distribution and maintenance of NMODE
are unclear.  (Nobody's very anxious to get tied up with maintaining
it.)

PSL systems are available from Utah for the following systems:

  VAX, Unix (4.1, 4.1a)     1600 BPI tar format
  DEC-20, Tops-20 V4 & V5   1600 BPI Dumper format
  Apollo, Aegis 5.0         6 floppy disks, RBAK format
  Extended DEC-20,          1600 BPI Dumper format
    Tops-20 V5

We are currently charging a $200 tape or floppy distribution fee for
each system.  To obtain a copy of the license and order form, please
send a NET message or letter with your US MAIL address to:

    Utah Symbolic Computation Group Secretary
    University of Utah - Dept. of Computer Science
    3160 Merrill Engineering Building
    Salt Lake City, Utah 84112

    ARPANET: CRUSE@UTAH-20
    USENET:  utah-cs!cruse

Send a note to me if you're interested in more information on NMODE.

--Will Galway [ GALWAY@UTAH-20 ]

------------------------------

End of AIList Digest
********************

∂18-Jul-83  1950	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #22
Received: from SRI-AI by SU-AI with TCP/SMTP; 18 Jul 83  19:49:39 PDT
Date: Monday, July 18, 1983 3:34PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #22
To: AIList@SRI-AI


AIList Digest            Tuesday, 19 Jul 1983      Volume 1 : Issue 22

Today's Topics:
  A Note from the Moderator
  Response to Extensible Editor Request
  How Many Prologs Are There ?
  Grammar Correction
  Machine Learning Workshop Proceedings
  Upcoming Conferences
  Computers in the Media ...
  CSCSI-84 Call for Papers
----------------------------------------------------------------------

Date: Mon 18 Jul 83 09:10:36-PDT
From: AIList-Request@SRI-AI <Laws@SRI-AI.ARPA>
Subject: A Note from the Moderator

This issue of AIList depends heavily on reprints from several BBoards.
Such reporting is important, but should not be the only function of
this "discussion list".  Let's have a little audience participation.

                                        -- Ken Laws

------------------------------

Date: 25 Jun 1983 1247-EDT
From: Chris Ryland <CPR@MIT-XX>
Subject: Response to Extensible Editor Request

         [Reprinted from the Editor-People discussion list.]

Let me point out that T, the Yale Scheme derivative, has been ported 
to the Apollo, VAX/Unix, VAX/VMS, and, soon, the 370 family, from what
I hear.  It appears to be the most efficient and portable Lisp to
appear on the market.  John O'Donnell at Yale (Odonnell@YALE) is the T
project leader.

------------------------------

Date: 2 Jul 83 13:11:36 EDT  (Sat)
From: Bruce T. Smith <BTS.UNC@UDel-Relay>
Subject: How Many Prologs Are There ?

                 [Reprinted from the Prolog Digest.]

        Here's Randy Harr's latest list of Prolog systems.  He's away 
from CWRU for the summer, and he asked me to keep up the list for him.
Since there have been several requests for information on finding a 
Prolog lately, I've recently submitted it to net.lang.prolog.  The 
info on MU-Prolog is the only thing I've added this summer, from a 
recent mailing from the U. of Melbourne.  (Now, if I could only find 
$100, I would like to try it...)

--Bruce T. Smith, UNC-CH
  duke!unc!bts (USENET)
  bts.unc@udel-relay (lesser NETworks)


list compiled by:  Randolph E. Harr
                   Case Western Reserve University
                   decvax!cwruecmp!harr
                   harr.Case@UDEL-RELAY

{ the list can be FTP'd as [SU-SCORE]PS:<PROLOG>Prolog.Availability.
  SU-SCORE observes Anonymous Login convention.  If you cannot FTP,
  I have a limited number of hard copies I could mail.  -ed }

------------------------------

Date: Mon 18 Jul 83 09:14:25-PDT
From: AIList-Request@SRI-AI <Laws@SRI-AI.ARPA>
Subject: Grammar Correction

The July issue of High Technology has an article titled "Software 
Tackles Grammar".  It includes very brief discussions of the Bell Labs
Writer's Workbench and the IBM EPISTLE systems.

------------------------------

Date: 15 Jul 83 09:25:36 EDT
From: GABINELLI@RUTGERS.ARPA
Subject: Machine Learning Workshop Proceedings

                [Reprinted from the Rutgers BBoard.]

Anyone wishing to order the Proceedings from the MLW can do so by
sending a check made out to the University of Illinois, in the amount
of $27.88 ($25 for Proceedings, $2.88 for postage) to:

            Ms. June Wingler
            Department of Computer Science
            1304 W. Springfield
            University of Illinois
            Urbana, Illinois 61801

------------------------------

Date: Fri 15 Jul 83 11:40:41-PDT
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Upcoming Conferences

                     [Reprinted from SU-BBoard.]

1983 ACM Sigmetrics Conference on Measurement and Modeling of Computer
Systems, August 29-31, 1983, Minneapolis, Minn.  To register, mail to
Registrar, Nolte Center, 315 Pillsbury Drive S.E., Minneapolis, MN
55455-0118.  For information contact Steven Bruell, CS Dept., Univ. of
MN, 123a Lind Hall, 612-376-3958.

2nd ACM Sigact-Sigops Symposium on Principles of Distributed Computing,
at Le Parc Regent, 3625 Avenue du Parc, Montreal, Quebec, Canada,
August 17-19, 1983.  Preregister by July 31: PODC Registration,
c/o Edward G. H. Smith, The Laurier Group, 275 Slater Street, Suite
1404, Ottawa, Ontario K1P 5H9 Canada.

HL

------------------------------

Date: 16 Jul 83  1610 PDT
From: Jim Davidson <JED@SU-AI>
Subject: Computers in the Media ...

                     [Reprinted from SU-BBoard.]

The August issue of Science Digest has an interview with Joseph
Weizenbaum.

He starts off by saying that the current popularity for personal
computers is something of a fad.  He claims that many of the uses of
PC's, such as storing recipes or recording appointments, are tasks
that are better done manually.

Then the discussion turns to AI:

Science Digest: You know, many of the computer's biggest promoters are
university computer scientists themselves, particularly in the more
exotic areas of computer science, like artificial intelligence.  Roger
Schank of Yale has set up a company, Cognitive Systems, that hopes to
market computer investment counselors, computer will-writers,
computers that can actually mimic a human's performance of a job.
[JED--but they have real trouble locating Bibb County.]  What do you
think of artificial intelligence entering the market place?

Joseph Weizenbaum: I suppose first of all that the word "mimicking" is
fairly significant.  These machines are not realizing human thought 
processes; they're mimicking them.  And I think what's being worked on
these days resembles the language understanding and production of
human beings only very superficially.  By the way, who needs a
computer will-maker?

SD: Some people can't afford a lawyer.

JW: The poor will be grateful to Dr. Schank for thinking of them...

..

SD: Yet, you know Dr. Schank's firm is videotaping humans in the hope
that by this means it can create a program which closely models the
expertise of the individual.

JW: That attitude displays such a degree of arrogance, such hubris
and, furthermore, a great deal of contempt for human beings.  To think
that one can take a very wise teacher, for example, and by observing
her capture the essence of that person to any significant degree is
simply absurd.  I'd say people who have that ambition, people who
think that it's going to be that easy or possible at all, are simply
deluded.

..

SD: Does it bother you that other computer scientists are marketing 
artificial intelligence?

JW: Yes, it bothers me.  It bothers me to the extent that these
commercial efforts are characterized at the same time as disinterested
science, the search for knowledge for knowledge's sake.  And it isn't.
It's done for money.  These people are spending the only capital
science has to offer:  its good name.  And once we lose that we've
lost everything.

------------------------------

Date: 14 Jul 83 11:10:07-PDT (Thu)
From: decvax!linus!utzoo!utcsrgv!tsotsos @ Ucb-Vax
Subject: CSCSI-84 Call for Papers
Article-I.D.: utcsrgv.1754

                         CALL FOR PAPERS

                         C S C S I - 8 4

                      Canadian Society for
              Computational Studies of Intelligence

                  University of Western Ontario
                         London, Ontario
                         May 18-20, 1984

     The Fifth National Conference of the CSCSI will be held at the
University of Western Ontario in London, Canada.  Papers are requested
in all areas of AI research, particularly those listed below.  The
Program Committee members responsible for these areas are included.

  Knowledge Representation:
    Ron Brachman (Fairchild R & D), John Mylopoulos (U of Toronto)
  Learning:
    Tom Mitchell (Rutgers U), Jaime Carbonell (CMU)
  Natural Language:
    Bonnie Webber (U of Pennsylvania), Ray Perrault (SRI)
  Computer Vision:
    Bob Woodham (U of British Columbia), Allen Hanson (U Mass)
  Robotics:
    Takeo Kanade (CMU), John Hollerbach (MIT)
  Expert Systems and Applications:
    Harry Pople (U of Pittsburgh), Victor Lesser (U Mass)
  Logic Programming:
    Randy Goebel (U of Waterloo), Veronica Dahl (Simon Fraser U)
  Cognitive Modelling:
    Zenon Pylyshyn, Ed Stabler (U of Western Ontario)
  Problem Solving and Planning:
    Stan Rosenschein (SRI), Drew McDermott (Yale)

     Authors are requested to prepare Full papers, of no more than
4000 words in length, or Short papers of no more than 2000 words in
length.  A full page of clear diagrams counts as 1000 words.  When
submitting, authors must supply the word count as well as the area in
which they wish their paper reviewed.  (Combinations of the above
areas are acceptable.)  The Full paper classification is intended for
well-developed ideas, with significant demonstration of validity,
while the Short paper classification is intended for descriptions of
research in progress.  Authors must ensure that their papers
describe original contributions to or novel applications of
Artificial Intelligence, regardless of length classification, and
that the research is properly compared and contrasted with relevant
literature.
     Three copies of each submitted paper must be in the hands of the
Program Chairman by December 7, 1983.  Papers arriving after that date
will be returned unopened, and papers lacking word count and
classifications will also be returned.  Papers will be fully reviewed
by appropriate members of the program committee.  Notice of acceptance
will be sent on February 28, 1984, and final camera ready versions are
due on March 31, 1984.  All accepted papers will appear in the
conference proceedings.

     Correspondence should be addressed to either the General Chairman
or the Program Chairman, as appropriate.

  General Chairman                  Program Chairman

  Ted Elcock,                       John K. Tsotsos
  Dept. of Computer Science,        Dept. of Computer Science,
  Engineering and Mathematical      10 King's College Rd.,
       Sciences Bldg.,              University of Toronto,
  University of Western Ontario     Toronto, Ontario, Canada,
  London, Ontario, Canada           M5S 1A4
  N6A 5B9                           (416)-978-3619
  (519)-679-3567

------------------------------

End of AIList Digest
********************

∂21-Jul-83  1918	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #23
Received: from SRI-AI by SU-AI with TCP/SMTP; 21 Jul 83  19:17:33 PDT
Date: Wednesday, July 20, 1983 3:35PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #23
To: AIList@SRI-AI


AIList Digest           Thursday, 21 Jul 1983      Volume 1 : Issue 23

Today's Topics:
  Reply from Cognitive Systems
  Lisp Portability
  UTILISP
  Hampshire College Summer Studies in Mathematics
  Re: CSCSI-84 Call for Papers
  AI Definitions (3)
  HP Computer Colloquium 7/21
  Next AFLB talk(s)
  Special Seminar--C. Beeri
----------------------------------------------------------------------

Date: Tue, 19 Jul 83 18:18:54 EDT
From: Steven Shwartz <Shwartz@YALE.ARPA>
Subject: Reply from Cognitive Systems

The following is a response to the recent letter to the editor of 
Psychology Today that was circulated on AI-List concerning a natural 
language system developed by Cognitive Systems Inc. for an oil 
company.  It states that "[the Cognitive Systems program] is friendly 
as long as you play by its rules and tell it what it expects to hear."

The system in question was neither designed nor touted as a general
natural language system.  It was designed to understand and respond to
queries about oil wells and topographical maps, and within its 
specified domain, it performs extremely well.  This system has been 
demonstrated at several conferences, most recently the Applied Natural
Language Conference in Santa Monica (February, 1983), where numerous 
members of the academic community tested the system and were favorably
impressed.

It should be noted that the individual who wrote the letter was not 
employed by either Cognitive Systems or the division of the oil 
company which commissioned this program.  In fact, he was a programmer
of the query language that the natural language front end was designed
to replace.

------------------------------

Date: Tue 19 Jul 83 15:24:00-EDT
From: Chip Maguire <Maguire@COLUMBIA-20.ARPA>
Subject: Lisp Portability

  [In response to Chris Ryland's message to Editor-People. -- KIL]

        Once again T is touted as "... the most efficient and portable
Lisp to appear on the market." As one of the people associated with
the development of PSL (Portable Standard LISP) at the University of
Utah, I feel that I must point out that PSL has been ported to the
Apollo, VAX/UNIX, DECSystem-20/TOPS-20, HP9836/???, Wicat/!?!?!?, and
versions are currently being implemented for the CRAY and 370
families.

The predecessor system "Standard LISP" along with the REDUCE symbolic 
algebra system ran on the following machines (as of October 1979):
Amdahl: 470V/6; CDC: 6400, 6600, 7600, Cyber 76; Burroughs: B6700,
B7700; DEC: PDP-10, DECsystem-10, DECsystem-20; CEMA: ES 1040;
Fujitsu: FACOM M-190; Hitachi: HITAC M-160, M-180; Honeywell: 66/60;
Honeywell-Bull:  1642; IBM: 360/44, 360/67, 360/75, 360/91, 370/155,
370/158, 370/165, 370/168, 3033, 370/195; ITEL: AS-6; Siemens: 4004;
Telefunken: TR 440; and UNIVAC: 1108, 1110.

  Then experiments began to port the system without having to deal
with a hand-coded LISP system which was slightly or grossly different
for each machine.  This led to a series of P-coded implementations
(for the 20, PDP-11, Z80, and Cray).  This then led via the Portable
LISP Compiler (Hearn and Griss) to the current compiler-based PSL
system.

So let's hear more about the good ideas in T and fewer nebulous
comments like: "more efficient and portable".

------------------------------

Date: 19 Jul 1983 13:02:23-EDT
From: Ichiro.Ogata at CMU-CS-G
Subject: UTILISP

                  [Reprinted from the CMU BBoard.]

        I came from the Univ. of Tokyo and brought a magnetic tape
  that contains UTILISP (a Lisp-Machine-Lisp-like Lisp), PROLOG-KR
  (described in UTILISP), and AMUSE (a structured editor).
        It works on IBM 370's (and compatible machines).  If this
interests you, please contact me.
                Ichiro Ogata io@cmu-cs-g


[and, for AIList, ...]

Yes, we are pleased to deliver UTILISP to anyone interested.  UTILISP
is written in Assembler and contains a compiler.  If you want more
information, please contact our colleagues.  Their address is

        Tokyo-To Bunkyo-Ku Hongo
                7chome 3-1
         Tokyo-Daigaku Kogaku-Bu Keisukogaku-Ka
                Wada laboratory

        Ichiro Ogata..

------------------------------

Date: 19 Jul 83 8:59:19-PDT (Tue)
From: ihnp4!houxm!hocda!machaids!pxs @ Ucb-Vax
Subject: Hampshire College Summer Studies in Mathematics
Article-I.D.: machaids.408


(7/17/83):

The 12th Hampshire College Summer Studies in Mathematics for high
ability high school students is now in session until August 19 in
Amherst, MA.  The Summer Studies has initiated a program in cognitive
sciences and is actively seeking foundation and industry support.
(Observers and guest lecturers are invited.)  For more information,
please write David Kelly, Box SS, Hampshire College, Amherst, MA
01002, or call (413) 549-4600 x357 (messages on x371).


Submitted to USENET for David Kelly by Peter Squires, HCSSiM, '77,
                                        ...ihnp4!machaids!pxs

------------------------------

Date: 19 Jul 83 18:43:10 EDT  (Tue)
From: Craig Stanfill <craig.umcp-cs@UDel-Relay>
Subject: Re: CSCSI-84 Call for Papers

    Authors are requested to prepare Full papers, of
    no more than 4000 words in length, or Short papers
    of no more than 2000 words in length.  A full page
    of clear diagrams counts as 1000 words ...

In other words, a picture is worth a thousand words? (ick)

------------------------------

Date: 18 Jul 83 18:13:40 EDT
From: Sri <Sridharan@RUTGERS.ARPA>
Subject: Defining AI ?

                [Reprinted from the Rutgers BBoard.]

I found the following sample entries in a dictionary and thought that
they were good definitions, esp. for a popular dictionary.  Your
reactions are welcome.

Selected entries from the Dictionary of Information Technology by
Dennis Longley and Michael Shain, John Wiley, 1982.

  Artificial Intelligence
    Research and study into methods for the development of
    systems that can demonstrate some of those attributes
    associated with human intelligence, e.g. the ability to
    recognize a variety of patterns from various viewpoints, the
    ability to form hypotheses from a limited set of
    information, the ability to select relevant information from
    a large set and draw conclusions from it etc.  See Expert
    Systems, Pattern Recognition, Robotics.

  Expert Systems
    In data bases, systems containing a database and associated
    software that enable a user to conduct an apparently
    intelligent dialog with the system in a user oriented
    language.  See Artificial Intelligence.

  Pattern Recognition
    In computing, the automatic recognition of shapes, patterns
    and curves.  The human optical and brain system is much
    superior to the most advanced computer system in matching
    images to those stored in memory.  This area is subject to
    intensive research effort because of its importance in the
    fields of robotics and artificial intelligence, and its
    potential areas of application, e.g.  reading handwritten
    script.  See Artificial Intelligence, Robotics.

  Robotics
    An area of artificial intelligence concerned with robots.

 Robot
    A device that can accept input signals and/or sense
    environmental conditions, process the data so obtained and
    activate a mechanical device to perform a desired action
    relating to the perceived environmental conditions or input
    signal.

------------------------------

Date: 19 Jul 83 09:43:02 EDT
From: Michael <Berman@RUTGERS.ARPA>
Subject: AI Definitions

                [Reprinted from the Rutgers BBoard.]

Speaking as an AI "outsider" the definitions seemed pretty good to me,
except for robotics.  I'm not sure I would classify it as a field of
AI, but rather as one that uses techniques from AI as well as other
areas of computer science and engineering.  Comments?

------------------------------

Date: 19 Jul 83 09:43:10 EDT
From: KELLY@RUTGERS.ARPA
Subject: re: Defining AI?

                [Reprinted from the Rutgers BBoard.]

Those definitions all look pretty good to me, except for the 
content-free entry under EXPERT SYSTEMS.  That is certainly a common
view among implementers of a certain mold (i.e., those coming from a
quasi-N.L. approach, e.g., LUNAR), but I wouldn't say that this is
where the FOCUS of *our* expert systems research has been.  Whatever
happened to the reason for calling such beasts "Expert" systems in the
first place?  It certainly wasn't because they were sterling
conversationalists!!

Anyway 4 out of 5 is pretty good.

Sorry to flame on friendly ears.

VK

------------------------------

Date: 18 Jul 83 20:37:04 PDT (Monday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 7/21


                Guy M. Lohman

                Research Staff Member
                IBM Research Laboratory
                San Jose, CA

                R* Project

The R* project was formed to address the problems of distributed 
databases, with the objective of designing and building an
experimental prototype database management system which would handle
replicated and partitioned data for both query and modification.  The
R* prototype supports a confederation of voluntarily cooperating,
homogeneous, relational database management systems, each with its own
data, sharing data across a communication network.

Two seemingly conflicting goals of distributed databases have been 
resolved efficiently in R*:  single-site image and site autonomy.  To 
make the system easy to use, R* presents a single-site image:  a
user requesting data need not be aware of or specify either the
location or the access path for retrieving that data, which requires
close coordination among sites.  On the other hand, to make local data
available even when other sites or communication lines fail, each R*
database site must be highly autonomous.

The talk will discuss how these goals were compatibly achieved in the 
design and implementation of R* without sacrificing system
performance.

        Thursday, July 21, 1983 4:00 pm

        Stanford Park Labs
        Hewlett Packard
        5M Conference room
        1501 Page Mill Road

*** Be sure to arrive at the building's lobby ON TIME, so that you may
be escorted to the conference room.

------------------------------

Date: Tue 19 Jul 83 22:41:51-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA>
Subject: Next AFLB talk(s)

                     [Reprinted from SU-BBoard.]


                   N E X T A F L B T A L K (S)

Despite the heat of summer AFLB is still alive!

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


7/21/83 - Michael Luby (Berkeley):

"Monte Carlo Algorithms to Approximate Solutions for NP-hard 
Enumeration and Reliability Problems"

****** Time and place: July 21, 12:30 pm in MJ352 (Bldg. 460) *****

If you'd like an abstract, you should be on the AFLB mailing list. -
Andrei
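Luby's subject, Monte Carlo approximation of NP-hard counting
problems, can be illustrated with the classic Karp-Luby coverage
estimator for counting the satisfying assignments of a DNF formula.
This is a sketch for flavor only; the talk's actual content may
differ.

```python
import random

def approx_dnf_count(clauses, n_vars, samples=100_000):
    """Estimate the number of assignments satisfying a DNF formula.

    clauses: list of clauses; each clause is a list of signed ints,
    e.g. [1, -3] means (x1 AND NOT x3).  A clause with k literals is
    satisfied by exactly 2**(n_vars - k) assignments.
    """
    weights = [2 ** (n_vars - len(c)) for c in clauses]
    total = sum(weights)
    hits = 0
    for _ in range(samples):
        # Pick a clause with probability proportional to its weight.
        i = random.choices(range(len(clauses)), weights=weights)[0]
        # Sample uniformly an assignment satisfying clause i.
        assign = {abs(l): (l > 0) for l in clauses[i]}
        for v in range(1, n_vars + 1):
            if v not in assign:
                assign[v] = random.random() < 0.5
        # Count the hit only if i is the FIRST clause this
        # assignment satisfies (avoids double-counting overlaps).
        first = next(j for j, c in enumerate(clauses)
                     if all(assign[abs(l)] == (l > 0) for l in c))
        if first == i:
            hits += 1
    return total * hits / samples
```

The estimator is unbiased, and the sample size needed for a given
relative error grows only polynomially in the number of clauses,
which is the point of the technique: exact DNF counting is #P-hard.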

------------------------------

Date: Tue 19 Jul 83 15:42:54-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Special Seminar--C. Beeri

                                SPECIAL SEMINAR

                          Thursday - July 21 - 2 P.M.

                  Margaret Jacks Hall (Bldg. 460) - Room 352

              CONCURRENCY CONTROL THEORY FOR NESTED TRANSACTIONS

                                   C. Beeri

Nested transactions occur in many situations, including explicit
nesting in application programs and implicit nesting in computing
systems.  E.g., database systems are usually implemented as multilevel
systems where operations of a high level language are translated in
several stages into programs using low level operations.  This creates
a nested transaction structure.  The same applies to systems that
support atomic data types, or concurrent access to search structures.
Synchronization of concurrent transactions can be performed at one or
more levels.  The existing theory does not provide a framework for
reasoning about concurrency in systems that support nesting.

In the talk, a general nested transaction model will be described.
The model can accommodate most of the nested transaction systems
currently known.  Tools for proving the serializability of
computations, and hence the correctness of the algorithms generating
them, will be presented.  In particular, it will be shown that the
p r a c t i c a l theory of CPSR logs can be easily generalized
so that previously known results (e.g., correctness of 2PL) can
be used.  Examples will be presented.
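The nesting the abstract describes can be pictured with a toy model
(invented here for illustration, not Beeri's formalism) in which a
subtransaction's updates become visible to its parent only on commit,
and reach the shared store only when the root commits.

```python
class Transaction:
    """Minimal nested-transaction sketch: tentative writes per level."""

    def __init__(self, parent=None):
        self.parent = parent
        self.writes = {}            # tentative updates at this level
        if parent is not None:
            pass                    # parent tracks nothing extra here

    def read(self, store, key):
        # Look up the innermost tentative value, else the shared store.
        t = self
        while t is not None:
            if key in t.writes:
                return t.writes[key]
            t = t.parent
        return store.get(key)

    def write(self, key, value):
        self.writes[key] = value

    def commit(self, store):
        # A subtransaction commits into its parent's tentative state;
        # the root commits to the shared store.
        target = self.parent.writes if self.parent else store
        target.update(self.writes)

    def abort(self):
        # Discard this transaction's tentative updates (including any
        # already-merged child commits).
        self.writes.clear()
```

For example, a child's write to "x" is visible to the parent only
after the child commits, and hits the store only on root commit.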

------------------------------

End of AIList Digest
********************
∂21-Jul-83  1819	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #24
Received: from SRI-AI by SU-AI with TCP/SMTP; 21 Jul 83  18:19:14 PDT
Date: Thursday, July 21, 1983 4:37PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #24
To: AIList@SRI-AI


AIList Digest            Friday, 22 Jul 1983       Volume 1 : Issue 24

Today's Topics:
  Weizenbaum in Science Digest
  AAAI Preliminary Schedule [Pointer]
  Report on Machine Learning Workshop [Abridged]
----------------------------------------------------------------------

Date: 20 July 1983 22:28 EDT
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Weizenbaum in Science Digest

How much credence do Professor Weizenbaum's ideas get among the
current A.I. community?  How do these statements relate to his work?

-- Steve

------------------------------

Date: 20 Jul 1983 0407-EDT
From: STRAZ.TD%MIT-OZ@MIT-MC
Subject: AAAI Preliminary Schedule

What follows is a complete preliminary schedule for AAAI-83.
Presumably changes are still possible, particularly in times, but it
does tell what papers will be presented.

AAAI-83 THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE at the
Washington Hilton Hotel, Washington, D.C. August 22-26, 1983, 
sponsored by THE AMERICAN ASSOCIATION FOR ARTIFICIAL INTELLIGENCE and
co-sponsored by University of Maryland and George Washington
University.

[Interested readers should FTP file <AILIST>V1N25.TXT from SRI-AI.  It
is about 19,000 characters.  -- KIL]

------------------------------

Date: 19 Jul 1983 1535-PDT
From: Jack Mostow <MOSTOW@USC-ISIF>
Subject: Report on Machine Learning Workshop [Abridged]


             1983 INTERNATIONAL MACHINE LEARNING WORKSHOP:
                          AN INFORMAL REPORT

                              Jack Mostow
                  USC Information Sciences Institute
                          4676 Admiralty Way
                       Marina del Rey, CA. 90291

                       Version of July 18, 1983

  [NOTE: This is a draft of a report to appear in the October 1983
SIGART.  I am circulating it at this time to get comments before
sending it in.  The report should give the flavor of the work
presented at the workshop, but is not intended to be formal, precise,
or complete.  With this understanding, please send corrections and
questions ASAP (before the end of July) to MOSTOW@USC-ISIF.  Thanks.
- Jack]

  The first invitational Machine Learning Workshop was held at C-MU
in the summer of 1980; selected papers were eventually published in
Machine Learning, edited by the conference organizers, Ryszard
Michalski, Jaime Carbonell, and Tom Mitchell.  The same winning team
has now brought us the 1983 International Machine Learning Workshop,
held June 21-23 in Allerton House, an English manor on a park-like
estate donated to the University of Illinois.  The Workshop featured
33 papers, two panel discussions, countless bull sessions, very
little sleep, and lots of fun.

  This totally subjective report tries to convey one participant's
impression of the event, together with a few random thoughts it
inspired.  I have classified the papers rather arbitrarily under the
topics of "Analogy," "Knowledge Transformation," and "Induction"
(broadly construed), but of course 33 independent research efforts
can hardly be expected to fall neatly into any simple classification
scheme.  The papers are discussed in semi-random order; I have tried
to put related papers next to each other.

    [The entire document is about 12 pages of printed text.
     I am abridging it here; interested readers may FTP the
     file <AILIST>V1N24.TXT from SRI-AI. -- KIL]

1. Analogy
     1.1. Lessons
2. Knowledge Transformation
     2.1. Lessons
3. Induction
     3.1. Inducing Rules
     3.2. Dealing with Noise
     3.3. Logic-based Work
     3.4. Cognitive Modelling
     3.5. Lessons
4. Panel Discussion:  Cognitive Modelling -- Why Bother?
5. Panel Discussion:  "Machine Learning -- Challenges of the 80's"


6. A Bit of Perspective
  No overview would be complete without a picture that tries to put
everything in perspective:


     -------------> generalizations ------------
    |                                           |
    |                                           |
INDUCTION                                  COMPILATION
(Knowledge Discovery)                   (Knowledge Transformation)
    |                                           |
    |                                           v
examples ----------- ANALOGY  --------> specialized solutions
                (Knowledge Transfer)

 Figure 6-1:   The Learning Triangle:  Induction, Analogy, Compilation

  Of course the distinction between these three forms of learning
breaks down under close examination.  For example, consider LEX2:
does it induce heuristics from examples, guided by its definition of
"heuristic," or does it compile that definition into special cases,
guided by examples?

7. Looking to the Future
  The 1983 International Workshop on Machine Learning felt like
history in the making.  What could be a more exciting endeavor than
getting machines to learn?  As we gathered for the official workshop
photograph, I thought of Pamela McCorduck's Machines Who Think, and
wondered if twenty years from now this gathering might not seem as
significant as some of those described there.  I felt privileged to
be part of it.

  In the meantime, there are lessons to be absorbed, and work to be
done....

  One lesson of the workshop is the importance of incremental
learning methods.  As one speaker observed, you can only learn things
you already almost know.  The most robust learning can be expected
from systems that improve their knowledge gradually, building on what
they have already learned, and using new data to repair deficiencies
and improve performance, whether it be in analogy [Burstein,
Carbonell], induction [Amarel, Dietterich & Buchanan, Holland,
Lebowitz, Mitchell], or knowledge transformation [Rosenbloom,
Anderson, Lenat].  This theme reflects the related idea of learning
and problem-solving as inherent parts of each other [Carbonell,
Mitchell, Rosenbloom].

  Of course not everyone saw things the way I do.  Here's Tom
Dietterich again: ``I was surprised that you summarized the workshop
in terms of an "incremental" theme.  I don't think incremental-ness
is particularly important--especially for expert system work.
Quinlan gets his noise tolerance by training on a whole batch of
examples at once.  I would have summarized the workshop by saying
that the key theme was the move away from syntax.  Hardly anyone
talked about "matching" and syntactic generalization.  The whole
concern was with the semantic justifications for some learned
concept: All of the analogy folks were doing this, as were Mitchell,
DeJong, and Dietterich and Buchanan.  The most interesting point that
was made, I thought, was Mitchell's point that we need to look at
cases where we can provide only partial justification for the
generalizations.  DeJong's "causal completeness" is too stringent a
requirement.''

  Second, the importance of making knowledge and goals explicit is
illustrated by the progress that can be made when a learner has
access to a description of what it is trying to acquire, whether it
is a criterion for the form of an inductive hypothesis [Michalski et
al] or a formal characterization of the kind of heuristic to be
learned for guiding a search [Mitchell et al].

  Third, as Doug Lenat pointed out, continued progress in learning
will require integrating multiple methods.  In particular, we need
ways to combine analytic and empirical techniques to escape from
their limitations when used alone.

  Finally, I think we can extrapolate from the experience of AI in
the '60's and '70's to set a useful direction for machine learning
research in the '80's.  Briefly, in AI the '60's taught us that
certain general methods exist and can produce some results, while the
'70's showed that large amounts of domain knowledge are required to
achieve powerful performance.  The same can be said for learning.  I
consider a primary goal of AI in the '80's, perhaps the primary goal,
to be the development of general techniques for exploiting domain
knowledge.  One such technique is the ability to learn, which itself
has proved to require large amounts of domain knowledge.  Whether we
approach this goal by building domain-specific learners (e.g.
MetaDendral) and then generalizing their methods (e.g. version space
induction), or by attempting to formulate general methods more
directly, we should keep in mind that a general and robust
intelligence will require the ability to learn from its experience
and apply its knowledge and methods to problems in a variety of
domains.
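As a concrete, if toy, illustration of the version-space induction
mentioned above, here is the specific-boundary half of candidate
elimination over conjunctive attribute-value hypotheses; the example
data is invented, and a full learner would also maintain the general
boundary G using the negative examples.

```python
def generalize_S(examples):
    """Maintain S, the most specific hypothesis covering all positives.

    examples: list of (attribute_tuple, is_positive).  A hypothesis is
    a conjunction of attribute values, where '?' matches anything.
    """
    S = None
    for x, positive in examples:
        if not positive:
            continue                 # sketch tracks only the S boundary
        if S is None:
            S = list(x)              # first positive: match it exactly
        else:                        # minimally generalize to cover x
            S = [s if s == v else '?' for s, v in zip(S, x)]
    return S
```

Two positives ("red", "round") and ("red", "square") generalize S to
["red", "?"]: the color is retained, the shape is generalized away.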

  A well-placed source has informed me that plans are already afoot
to produce a successor to the Machine Learning book, using the 1983
workshop papers and discussions as raw material.  In the meantime,
there is a small number of extra proceedings which can be acquired
(until they run out) for $27.88 ($25 + $2.88 postage in U.S., more
elsewhere), check payable to University of Illinois.  Order from

     June Wingler
     University of Illinois at Urbana-Champaign
     Department of Computer Science
     1304 W. Springfield Avenue
     Urbana, IL 61801

  There are tentative plans for a similar workshop next summer at
Rutgers.

------------------------------

End of AIList Digest
********************

∂21-Jul-83  1640	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #25
Received: from SRI-AI by SU-AI with TCP/SMTP; 21 Jul 83  16:40:51 PDT
Date: Thursday, July 21, 1983 4:37PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #25
To: AIList@SRI-AI


AIList Digest            Friday, 22 Jul 1983       Volume 1 : Issue 25

Today's Topics:
  AAAI Preliminary Schedule
----------------------------------------------------------------------

Date: 20 Jul 1983 0407-EDT
From: STRAZ.TD%MIT-OZ@MIT-MC
Subject: AAAI Preliminary Schedule

What follows is a complete preliminary schedule for AAAI-83.
Presumably changes are still possible, particularly in times, but it
does tell what papers will be presented.

AAAI-83 THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE at the
Washington Hilton Hotel, Washington, D.C. August 22-26, 1983, 
sponsored by THE AMERICAN ASSOCIATION FOR ARTIFICIAL INTELLIGENCE and
co-sponsored by University of Maryland and George Washington
University.

CONFERENCE SCHEDULE

SUNDAY, AUGUST 21
←←←←←←←←←←←←←←←←←

5:30-7:00 CONFERENCE, TUTORIAL, AND TECHNOLOGY TRANSFER SYMPOSIUM REGISTRATION

MONDAY, AUGUST 22 - FRIDAY, AUGUST 26
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-5:00 AAAI-83 R & D EXHIBIT PROGRAM 

WEDNESDAY, AUGUST 24 - FRIDAY, AUGUST 26
--------------------------------------

8:00 p.m.  SMALL GROUP MEETINGS: please sign up for rooms at the information
           desk in the Concourse Lobby.

SUNDAY, AUGUST 21 - THURSDAY, AUGUST 25
----------------------------------------

7:00 p.m. FREDKIN- AAAI COMPUTER CHESS TOURNAMENT

Each night at 7:00 p.m., the Fredkin-AAAI Tournament will demonstrate
a version of the Turing Test: human players will not know whether
they are playing a machine or another human player, each being
equally likely.  Human players will be rewarded primarily for
winning, but secondarily for guessing the genus of their opponent.
The audience also will be kept in the dark, and there should be some
fun in guessing who is who as the game progresses.

There will be three games per night; each night, two games will pit
a human being against a computer and the other game will pit two
human players against each other.  The computer systems' names are
Belle and Nuchess.

TUTORIAL PROGRAM

MONDAY, AUGUST 22 - TUESDAY, AUGUST 23
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

8:00-5:00 TUTORIAL REGISTRATION in the CONCOURSE LOBBY, CONCOURSE LEVEL

MONDAY, AUGUST 22
←←←←←←←←←←←←←←←←←

9:00-1:00  TUTORIAL NUMBER 1: AN INTRODUCTION TO ARTIFICIAL INTELLIGENCE
			Dr. Eugene Charniak, Brown University
			    
9:00-1:00  TUTORIAL NUMBER 2: AN INTRODUCTION TO ROBOTICS
			Dr. Richard Paul, Purdue University

2:00-6:00  TUTORIAL NUMBER 3: NATURAL LANGUAGE PROCESSING
		        Dr. Gary G. Hendrix, SYMANTEC, Inc.

2:00-6:00  TUTORIAL NUMBER 4: EXPERT SYSTEMS - PART 1 - FUNDAMENTALS
			Drs. Randall Davis and Charles Rich, MIT

TUESDAY, AUGUST 23
←←←←←←←←←←←←←←←←←←

9:00-1:00 TUTORIAL NUMBER 5: EXPERT SYSTEMS - PART 2 - APPLICATION AREAS
			Drs. Randall Davis and Charles Rich, MIT

9:00-1:00 TUTORIAL NUMBER 6: AI PROGRAMMING TECHNOLOGY - LANGUAGES AND MACHINES
			Dr. Howard Shrobe, MIT and Symbolics
		        Dr. Larry Masinter, Xerox Palo Alto Research Center
				
MONDAY, AUGUST 22
←←←←←←←←←←←←←←←←←

8:00-5:00 TECHNOLOGY TRANSFER SYMPOSIUM REGISTRATION in CONCOURSE LOBBY

TUESDAY, AUGUST 23
←←←←←←←←←←←←←←←←←←

8:00-2:00 TECHNOLOGY TRANSFER SYMPOSIUM REGISTRATION in CONCOURSE LOBBY

2:00-9:30 TECHNOLOGY TRANSFER SYMPOSIUM (6-7:30 dinner break)

TECHNICAL WORKSHOPS
←←←←←←←←←←←←←←←←←←←

MONDAY, AUGUST 22 AND TUESDAY, AUGUST 23
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-5:00 SENSORS AND ALGORITHMS FOR 3-D VISION Dr. Azriel Rosenfeld, Maryland

9:00-5:00 PLANNING organized by Dr. Robert Wilensky, Berkeley

HOSPITALITY
←←←←←←←←←←←

MONDAY, AUGUST 22
←←←←←←←←←←←←←←←←←

6:00-8:00 RECEPTION (Welcome!) in the CONCOURSE EXHIBIT HALL, CONCOURSE LEVEL

TUESDAY, AUGUST 23
←←←←←←←←←←←←←←←←←←

5:30-7:00 CONFERENCE REGISTRATION RECEPTION; INTERNATIONAL TERRACE

WEDNESDAY, AUGUST 24
←←←←←←←←←←←←←←←←←←←←

6:00-8:00 MAIN CONFERENCE RECEPTION (NO HOST BAR); INTERNATIONAL TERRACE

THURSDAY, AUGUST 25
←←←←←←←←←←←←←←←←←←←

6:00-7:00 BOARDING BUSES FOR GALA at the T STREET ENTRANCE, TERRACE LEVEL
				
7:00-10:30 GALA RECEPTION AND ENTERTAINMENT AT THE CAPITOL CHILDREN'S MUSEUM 
           (NO HOST BAR) *** RESERVATIONS ONLY ***
				
FRIDAY, AUGUST 26
←←←←←←←←←←←←←←←←←

6:00-8:00 HAIL AND FAREWELL in the INTERNATIONAL BALLROOM EAST

TECHNICAL CONFERENCE SCHEDULE
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←	

* PLEASE NOTE: Depending on the size of attendance, closed circuit T.V.
will be available Wednesday, August 24 thru Friday, August 26, for
particular sessions (that is, those sessions scheduled for the
International Ballroom Center and West).  The closed circuit
T.V. rooms will be the Georgetown Room, Concourse Level, and the
Back Terrace, Terrace Level.

MONDAY, AUGUST 22
←←←←←←←←←←←←←←←←←

8:00-5:00 TECHNICAL CONFERENCE REGISTRATION

TUESDAY, AUGUST 23
←←←←←←←←←←←←←←←←←←

8:00-7:00 TECHNICAL CONFERENCE REGISTRATION 

7:00 p.m. SPECIAL SESSION dedicated to Dr. Victor Lesser, UMASS

WEDNESDAY, AUGUST 24
←←←←←←←←←←←←←←←←←←←←

8:00-5:00 TECHNICAL CONFERENCE REGISTRATION

KNOWLEDGE REPRESENTATION AND PROBLEM SOLVING SESSION I
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-9:20 AN OVERVIEW OF META-LEVEL ARCHITECTURE Michael Genesereth, Stanford 
				
9:20-9:40 FINDING ALL OF THE SOLUTIONS TO A PROBLEM David Smith, Stanford 
				
9:40-10:00 COMMUNICATION & INTERACTION IN MULTI-AGENT PLANNING
           Michael Georgeff, SRI

10:00-10:20 DATA DEPENDENCIES ON INEQUALITIES Drew McDermott, Yale 

10:20-10:40 KRYPTON: INTEGRATING TERMINOLOGY & ASSERTION 
            Ronald Brachman and Hector Levesque, Fairchild AI Laboratory
            Richard Fikes, Xerox PARC

in the INTERNATIONAL BALLROOM CENTER

COGNITIVE MODELLING SESSION I
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-9:20 THREE DIMENSIONS OF DESIGN DEVELOPMENT Neil M. Goldman, USC/ISI

9:20-9:40 SIX PROBLEMS FOR STORY UNDERSTANDERS 	Peter Norvig, Berkeley

9:40-10:00 PLANNING AND GOAL INTERACTION: THE USE OF PAST SOLUTIONS IN PRESENT
           SITUATIONS Kristian Hammond, Yale 

10:00-10:20 A MODEL OF INCREMENTAL LEARNING BY INCREMENTAL AND ANALOGICAL 
            REASONING & DEBUGGING Mark Burstein, Yale 

10:20-10:40 MODELLING OF HUMAN KNOWLEDGE ROUTES: PARTIAL AND INDIVIDUAL 
            VARIATION Benjamin Kuipers, Tufts 

in the INTERNATIONAL BALLROOM WEST

VISION AND ROBOTICS SESSION I
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-9:20 A VARIATIONAL APPROACH TO EDGE DETECTION John Canny, MIT

9:20-9:40 SURFACE CONSTRAINTS FROM LINEAR EXTENTS John Kender, Columbia

9:40-10:00 AN ITERATIVE METHOD FOR RECONSTRUCTING CONVEX POLYHEDRA FROM 
           EXTENDED GAUSSIAN IMAGES James J. Little, U. of British Columbia

10:00-10:20 TWO RESULTS CONCERNING AMBIGUITY IN SHAPE FROM SHADING
            M.J. Brooks, Flinders University of South Australia

In the INTERNATIONAL BALLROOM EAST


10:40-11:00 BREAK

11:00-12:30 PANEL: LOGIC PROGRAMMING
            Howard Shrobe, Organizer, MIT
            Michael Genesereth, Stanford,
            J. Alan Robinson, David Warren, SRI International

In the INTERNATIONAL BALLROOM CENTER

12:30-2:00 LUNCH BREAK
           ANNUAL SIGART BUSINESS MEETING in the HEMISPHERE ROOM

2:00-3:10 INVITED LECTURE: THE STATE OF THE ART IN COMPUTER LEARNING
          Douglas Lenat, Stanford in the INTERNATIONAL BALLROOM CENTER

3:10-3:30 BREAK

NATURAL LANGUAGE SESSION I
←←←←←←←←←←←←←←←←←←←←←←←←←←

3:30-3:50 RECURSION IN TEXT AND ITS USE IN LANGUAGE GENERATION 
           Kathleen McKeown, Columbia

3:50-4:10 RELAXATION IN REFERENCE Bradley Goodman, BBN

4:10-4:30 TRACKING USER GOALS IN AN INFORMATION-SEEKING ENVIRONMENT 
          M. Sandra Carberry, Delaware

4:30-4:50 REASONS FOR BELIEFS IN UNDERSTANDING: APPLICATIONS OF NON-MONOTONIC
          DEPENDENCIES TO STORY PROCESSING Paul O'Rorke, Illinois

4:50-5:10 RESEARCHER: AN OVERVIEW Michael Lebowitz, Columbia 
	
in the INTERNATIONAL BALLROOM EAST

LEARNING SESSION I
←←←←←←←←←←←←←←←←←←

3:30-3:50 EPISODIC LEARNING Dennis Kibler and Bruce Porter, California-Irvine

3:50-4:10 HUMAN PROCEDURAL SKILL ACQUISITION: THEORY, MODEL AND PSYCHOLOGICAL
          VALIDATION Kurt VanLehn, Xerox PARC

4:10-4:30 A PRODUCTION SYSTEM FOR LEARNING FROM AN EXPERT
          D. Paul Benjamin and Malcolm Harrison, Courant Institute, NYU

4:30-4:50 OPERATOR DECOMPOSABILITY: A NEW TYPE OF PROBLEM STRUCTURE 
          Richard Korf, CMU

4:50-5:10 SCHEMA SELECTION AND STOCHASTIC INFERENCE IN MODULAR ENVIRONMENTS
          Paul Smolensky, UCSD

in the INTERNATIONAL BALLROOM WEST

EXPERT SYSTEMS SESSION I
------------------------

3:30-3:50 THE DESIGN OF A LEGAL ANALYSIS PROGRAM Anne v.d.L. Gardner, Stanford

3:50-4:10 THE ADVANTAGES OF ABSTRACT CONTROL KNOWLEDGE IN EXPERT SYSTEM DESIGN
          William J. Clancey, Stanford 

4:10-4:30 THE GIST BEHAVIOR EXPLAINER William Swartout, USC/ISI

4:30-4:50 A COMPARATIVE STUDY OF CONTROL STRATEGIES FOR EXPERT SYSTEMS: AGE 
          IMPLEMENTATION OF THE THREE VARIATIONS OF PUFF 
          Nelleke Aiello, Stanford 

4:50-5:10 A RULE-BASED APPROACH TO INFORMATION RETRIEVAL: SOME RESULTS AND 
          COMMENTS Richard Tong, Daniel Shapiro, Brian McCune & Jeffrey Dean,
          Advanced Information & Decision Systems

5:10-5:30 EXPERT SYSTEM CONSULTATION CONTROL STRATEGY James Slagle and Michael
          Gaynor, Naval Research Laboratory

in the INTERNATIONAL BALLROOM CENTER

7:00 P.M. AAAI EXECUTIVE COMMITTEE MEETING 

THURSDAY, AUGUST 25
←←←←←←←←←←←←←←←←←←←

8:00-5:00 TECHNICAL CONFERENCE REGISTRATION in the CONCOURSE LOBBY

KNOWLEDGE REPRESENTATION AND PROBLEM SOLVING SESSION II
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-9:20 THE DENOTATIONAL SEMANTICS OF HORN CLAUSES AS A PRODUCTION SYSTEM
          J-L. Lassez and M. Maher, University of Melbourne

9:20-9:40 THEORY RESOLUTION: BUILDING IN NONEQUATIONAL THEORIES
          Mark Stickel, SRI International

9:40-10:00 IMPROVING THE EXPRESSIVENESS OF MANY SORTED LOGIC 
           Anthony Cohn, University of Warwick

10:00-10:20 THE BAYESIAN BASIS OF COMMON SENSE MEDICAL DIAGNOSIS 
            Eugene Charniak, Brown

10:20-10:40 ANALYZING THE ROLES OF DESCRIPTIONS AND ACTIONS IN OPEN SYSTEMS
            Carl Hewitt and Peter DeJong, MIT

in the INTERNATIONAL BALLROOM CENTER

NATURAL LANGUAGE SESSION II
←←←←←←←←←←←←←←←←←←←←←←←←←←←

9:00-9:20 PHONOTACTIC AND LEXICAL CONSTRAINTS IN SPEECH RECOGNITION 
          Daniel P. Huttenlocher and Victor W. Zue, MIT

9:20-9:40 DETERMINISTIC AND BOTTOM-UP PARSING IN PROLOG 
          Edward Stabler, Jr., University of Western Ontario

9:40-10:00 MCHART: A FLEXIBLE, MODULAR CHART PARSING SYSTEM 
           Henry Thompson, Edinburgh

10:00-10:20 INFERENCE-DRIVEN SEMANTIC ANALYSIS Martha Stone Palmer, Penn & SDC

10:20-10:40 MAPPING BETWEEN SEMANTIC REPRESENTATIONS USING HORN CLAUSES
	    Ralph M. Weischedel, Delaware

in the INTERNATIONAL BALLROOM WEST

SEARCH SESSION I
←←←←←←←←←←←←←←←←

9:00-9:20 A THEORY OF GAME TREES Chun-Hung Tzeng, Paul Purdom, Jr., Indiana

9:20-9:40 OPTIMALITY OF A* REVISITED  Rina Dechter and Judea Pearl, UCLA

9:40-10:00 SOLVING THE GENERAL CONSISTENT LABELING (OR CONSTRAINT SATISFACTION)
           PROBLEM: TWO ALGORITHMS AND THEIR EXPECTED COMPLEXITIES
           Bernard Nudel, Rutgers

10:00-10:20 THE COMPOSITE DECISION PROCESS: A UNIFYING FORMULATION FOR 
            HEURISTIC SEARCH, DYNAMIC PROGRAMMING AND BRANCH & BOUND PROCEDURES
            Vipin Kumar, Texas & Laveen Kanal, Maryland

10:20-10:40 NON-MINIMAX SEARCH STRATEGIES FOR USE AGAINST FALLIBLE OPPONENTS
            Andrew Louis Reibman and Bruce Ballard, Duke 

in the INTERNATIONAL BALLROOM EAST

10:40-11:00 BREAK

11:00-12:30 AAAI PRESIDENTIAL ADDRESS Nils Nilsson, SRI International
            ANNOUNCEMENT OF THE PUBLISHER'S PRIZE
            AAAI COMMENDATION FOR EXCELLENCE to MARVIN DENICOFF, Office of 
            Naval Research

in the INTERNATIONAL BALLROOM CENTER

12:30-2:00 LUNCH BREAK
           ANNUAL AAAI BUSINESS MEETING in the INTERNATIONAL BALLROOM CENTER

2:00-3:10 THE GREAT DEBATE: METHODOLOGIES FOR AI RESEARCH 
          John McCarthy, Stanford vs. Roger Schank, Yale 
				 	
in the INTERNATIONAL BALLROOM CENTER


3:10-3:30 BREAK

KNOWLEDGE REPRESENTATION AND PROBLEM SOLVING SESSION III
-------------------------------------------------------

3:30-3:50 PROVING THE CORRECTNESS OF DIGITAL HARDWARE DESIGNS
          Harry G. Barrow, Fairchild AI Laboratory

3:50-4:10 A CHESS PROGRAM THAT CHUNKS Murray Campbell & Hans Berliner, CMU

4:10-4:30 THE DECOMPOSITION OF A LARGE DOMAIN: REASONING ABOUT MACHINES
          Craig Stanfill, Maryland

4:30-4:50 REASONING ABOUT STATE FROM CAUSATION AND TIME IN A MEDICAL DOMAIN
          William Long, MIT

4:50-5:10 THE USE OF QUALITATIVE AND QUANTITATIVE SIMULATIONS Reid Simmons, MIT

5:10-5:30 AN AUTOMATIC ALGORITHM DESIGNER: AN INITIAL IMPLEMENTATION
          Elaine Kant and Allen Newell, CMU

in the INTERNATIONAL BALLROOM EAST

LEARNING SESSION II
←←←←←←←←←←←←←←←←←←←

3:30-3:50 WHY AM AND EURISKO APPEAR TO WORK 
          Douglas Lenat, Stanford, John Seely Brown, Xerox PARC

3:50-4:10 LEARNING PHYSICAL DESCRIPTIONS FROM FUNCTIONAL DEFINITIONS, EXAMPLES,
          AND PRECEDENTS Patrick Winston & Boris Katz, MIT, Thomas Binford & 
          Michael Lowry, Stanford 

4:10-4:30 A PROBLEM-SOLVER FOR MAKING ADVICE OPERATIONAL Jack Mostow, USC/ISI

4:30-4:50 GENERATING HYPOTHESES TO EXPLAIN PREDICTION FAILURES 
          Steven Salzberg, Yale 

4:50-5:10 LEARNING BY RE-EXPRESSING CONCEPTS FOR EFFICIENT RECOGNITION
          Richard Keller, Rutgers 

in the INTERNATIONAL BALLROOM WEST

EXPERT SYSTEMS SESSION II
←←←←←←←←←←←←←←←←←←←←←←←←←

3:30-3:50 DIAGNOSIS VIA CAUSAL REASONING: PATHS OF INTERACTION AND THE 
          LOCALITY PRINCIPLE Randall Davis, MIT

3:50-4:10 A NEW INFERENCE METHOD FOR FRAME-BASED EXPERT SYSTEMS 
          James Reggia, Dana Nau, Pearl Wang, Maryland

4:10-4:30 ANALYSIS OF PHYSIOLOGICAL BEHAVIOR USING A CAUSAL MODEL BASED ON 
          FIRST PRINCIPLES John C. Kunz, Stanford 

4:30-4:50 AN INTELLIGENT AID FOR CIRCUIT REDESIGN Tom Mitchell, Louis 
          Steinberg, Smadar Kedar-Cabelli, Van Kelly, Jeffrey Shulman, 
          Timothy Weinrich, Rutgers 

4:50-5:10 TALIB: AN IC LAYOUT DESIGN ASSISTANT Jin Kim and John McDermott, CMU

in the INTERNATIONAL BALLROOM CENTER

FRIDAY, AUGUST 26
←←←←←←←←←←←←←←←←←

KNOWLEDGE REPRESENTATION & PROBLEM SOLVING SESSION IV
------------------------------------------------------

9:00-9:20 ON INHERITANCE HIERARCHIES WITH EXCEPTIONS David W. Etherington, 
          University of British Columbia, Raymond Reiter, UBC and Rutgers

9:20-9:40 DEFAULT REASONING AS LIKELIHOOD REASONING Elaine Rich, Texas

9:40-10:00 DEFAULT REASONING USING MONOTONIC LOGIC: A MODEST PROPOSAL
           Jane Terry Nutter, Tulane 

10:00-10:20 A THEOREM-PROVER FOR A DECIDABLE SUBSET OF DEFAULT LOGIC
            Philippe Besnard, Rene Quiniou, and Patrice Quinton,
            IRISA-INRIA Rennes

10:20-10:40 DERIVATIONAL ANALOGY AND ITS ROLE IN PROBLEM SOLVING
            Jaime Carbonell, CMU

in the INTERNATIONAL BALLROOM CENTER

COGNITIVE MODELLING SESSION II
------------------------------

9:00-9:20 STRATEGIST: A PROGRAM THAT MODELS STRATEGY-DRIVEN AND CONTENT-DRIVEN
          INFERENCE BEHAVIOR Richard Granger, Jennifer Holbrook, and
          Kurt Eiselt, California-Irvine

9:20-9:40 LEARNING OPERATOR SEMANTICS BY ANALOGY
Sarah Douglas, Stanford & Xerox PARC, Thomas Moran, Xerox PARC

9:40-10:00 AN ANALYSIS OF A WELFARE ELIGIBILITY DETERMINATION INTERVIEW: 
           A PLANNING APPROACH 	Eswaran Subrahmanian, CMU

in the INTERNATIONAL BALLROOM EAST

VISION AND ROBOTICS SESSION II
------------------------------

9:00-9:20 PERCEPTUAL ORGANIZATION AS A BASIS FOR VISUAL RECOGNITION
          David Lowe and Thomas Binford, Stanford

9:20-9:40 MODEL BASED INTERPRETATION OF RANGE IMAGERY
          Darwin Kuan and Robert Drazovich, AI&DS		

9:40-10:00 A DESIGN METHOD FOR RELAXATION LABELING APPLICATIONS
           Robert Hummel, Courant Institute, NYU

10:00-10:20 APPROPRIATE LENGTHS BETWEEN PHALANGES OF MULTI-JOINTED FINGERS FOR
            STABLE GRASPING Tokuji Okada and Takeo Kanade, CMU

10:20-10:40 FIND-PATH FOR A PUMA-CLASS ROBOT Rodney Brooks, MIT

in the INTERNATIONAL BALLROOM WEST

10:40-11:00 BREAK

11:00-12:30 PANEL: ADVANCED HARDWARE ARCHITECTURES FOR ARTIFICIAL INTELLIGENCE
	    Allen Newell, Organizer, CMU

in the INTERNATIONAL BALLROOM 

12:30-2:00 LUNCH BREAK
           AAAI SUBGROUP: AI IN MEDICINE MEMBERSHIP MEETING in HEMISPHERE ROOM

2:00-3:10 INVITED LECTURE - THE STATE OF THE ART IN ROBOTICS Michael Brady, MIT

in the INTERNATIONAL BALLROOM

3:10-3:30 BREAK

SEARCH SESSION II 
-----------------

3:30-3:50 INTELLIGENT CONTROL USING INTEGRITY CONSTRAINTS 
          Madhur Kohli and Jack Minker, Maryland

3:50-4:10 PREDICTING THE PERFORMANCE OF DISTRIBUTED KNOWLEDGE-BASED SYSTEMS:
          A MODELLING APPROACH Jasmina Pavlin, UMASS

in the INTERNATIONAL BALLROOM EAST

LEARNING SESSION III 
--------------------

3:30-3:50 LEARNING: THE CONSTRUCTION OF A POSTERIORI KNOWLEDGE STRUCTURES
          Paul Scott, University of Michigan

3:50-4:10 A DOUBLY LAYERED, GENETIC PENETRANCE LEARNING SYSTEM
          Larry Rendell, University of Guelph

4:10-4:30 AN ANALYSIS OF GENETIC-BASED PATTERN TRACKING AND COGNITIVE-BASED 
          COMPONENT TRACKING MODELS OF ADAPTATION
          Elaine Pettit and Kathleen Swigger, North Texas State University

in the INTERNATIONAL BALLROOM CENTER

SUPPORT HARDWARE AND SOFTWARE SESSION
-------------------------------------

3:30-3:50 MASSIVELY PARALLEL ARCHITECTURES FOR AI: NETL, THISTLE, AND BOLTZMANN
          MACHINES Scott Fahlman, Geoffrey Hinton, CMU, Terrence Sejnowski, JHU

3:50-4:10 YAPS: A PRODUCTION RULE SYSTEM MEETS OBJECTS 
          Elizabeth Allen, Maryland

4:10-4:30 SPECIFICATION-BASED COMPUTING ENVIRONMENTS Robert Balzer, David Dyer,
          Mathew Morgenstern, and Robert Neches, USC/ISI

in the INTERNATIONAL BALLROOM WEST

------------------------------

End of AIList Digest
********************

∂25-Jul-83  2359	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #26
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Jul 83  23:58:54 PDT
Date: Monday, July 25, 1983 10:15PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #26
To: AIList@SRI-AI


AIList Digest            Tuesday, 26 Jul 1983      Volume 1 : Issue 26

Today's Topics:
  AAAI-83 Schedule on USENet
  Roommates Wanted for AAAI
  Artificial Intelligence Info for kids
  Preparing Govmt Report on Canadian AI Research
  Definitions (2)
  Expectations of Expert System Technology
  Portable and More Efficient Lisps (3)
----------------------------------------------------------------------

Date: 24 Jul 83 20:20:09-PDT (Sun)
From: decvax!linus!utzoo!utcsrgv!peterr @ Ucb-Vax
Subject: AAAI-83 sched. avail. on USENet
Article-I.D.: utcsrgv.1828

I have a somewhat compressed, but still large (18052 ch.), on-line
version of the AAAI-83 schedule that I'm willing to mail to USENet
people on request.
   peter rowley, U. Toronto CSRG
   {cornell,watmath,ihnp4,floyd,allegra,utzoo,uw-beaver}!utcsrgv!peterr
 or {cwruecmp,duke,linus,lsuc,research}!utzoo!utcsrgv!peterr

------------------------------

Date: 22 Jul 83 10:34:11-PDT (Fri)
From: decvax!linus!utzoo!hcr!ravi @ Ucb-Vax
Subject: Room-mates wanted for AAAI
Article-I.D.: hcr.451

A friend (Mike Rutenberg) and I are going to AAAI at the end of
August.  We'd like to find a couple of people to share a room with --
both to meet interesting people and to save some money.  If you're
interested, please let me know by mail.

Also, if you have any other useful hints (like cheap transportation
from Ontario or better places to stay than the Hilton), please drop me
a line.

Thanks for your help.
        --ravi

------------------------------

Date: 24 Jul 1983 0727-CDT
From: Clive Dawson <CC.Clive@UTEXAS-20>
Subject: Artificial Intelligence Info for kids

               [Reprinted from the UTexas-20 BBoard.]

I received a letter from an 8th grader in Houston who wants to do a
science fair project on Artificial Intelligence.

        "...I plan to explain and demonstrate this topic with
         my computer and a program I made on it concerning this
         topic.  Any information you could send for my research
         would be appreciated."

If anybody knows of any source of AI information suitable for Jr. High
School level (good magazine articles written for the layman, etc.)
please let me know.  I have come across such stuff every so often, but
I'm having trouble remembering where.

Thanks,

Clive

------------------------------

Date: 23 Jul 83 16:30:27-PDT (Sat)
From: decvax!linus!utzoo!utcsrgv!zenon @ Ucb-Vax
Subject: Preparing Govmt report on Canadian AI research
Article-I.D.: utcsrgv.1823

A consortium of 4 groups has been awarded a contract by the Secretary
of State to prepare a report on what Canada ought to be doing to
support R & D in artificial intelligence in the next 5-10 years.  The
groups are Quasar Systems of Ottawa, Nordicity Group of Toronto,
Socioscope of Ottawa, and a group of academic AI people (Pylyshyn,
Mackworth, Skuce, Kittredge, Isabel, with consultants Tsotso,
Mylopoulos, Zucker, Cercone).  Because the client's primary interest
is in language (esp. translation) the report will concentrate on that
aspect, though we plan to cover all of AI on the grounds that it's all
of a piece.  The contract period is July-Dec 1983.  I am coordinating
the technical part of the report.

We are seeking input from all interested parties.  I will be touring
Canada, probably in September, and would like to talk to anyone who
has an AI lab and some ideas about where Canada ought to focus.  I am
especially eager to receive input from, and information about,
what's happening in Canadian industry.

I welcome all suggestions and invitations.  This is the first AI study
commissioned by a federal agency, and we should take it as an
opportunity to give them a good cross-section of views.

Zenon Pylyshyn, Centre for Cognitive Science, University of Western
Ontario, London, Ontario, N6A 5C2.  (519)-679-2461

utcsrgv!zenon or on the ARPANET Pylyshyn@CMU-CS-C

------------------------------

Date: Fri 22 Jul 83 09:32:16-EDT
From: MASON@CMU-CS-C.ARPA
Subject: Re: definition of robot

I think the definition of robot is a little too broad.  I've long been
reconciled to definitions which include, for instance,
cam-programmable sewing machines, but this new definition even
includes pistols.  (An input signal, trigger pressure, is processed
mechanically to actuate a mechanical device, the bullet.)  Of course,
if the NRA decided to lobby for robots ...

------------------------------

Date: Fri 22 Jul 83 09:22:54-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Definitions

Here are a few definitions taken from a Teknowledge/CEI ad:

  Artificial Intelligence
    That subfield of Computer Science which is concerned with
    symbolic reasoning and problem solving by computer.

  Knowledge Engineering
    The engineering discipline whereby knowledge is integrated
    into computer systems in order to solve complex problems
    normally required [sic] in a high level of human expertise.

  Knowledge/Expert Systems
    Computer systems that embody knowledge including inexact,
    heuristic and subjective knowledge; the results of knowledge
    engineering.

  Knowledge Representation
    A formalism for representing facts and rules about a subject
    or specialty.

  Knowledge Base
    A base of information encoded in a knowledge representation
    for a particular application.

  Inference Technique
    A methodology for reasoning about information in knowledge
    representation [sic] and drawing conclusions from that knowledge.

  Task Domains
    Application areas for knowledge systems such as analysis of
    oil well drilling problems or identification of computer
    system failures.

  Heuristics
    The informal, judgmental knowledge of an application area
    that constitutes the ``rules of good judgement'' in the field.
    Heuristics also encompass the knowledge of how to solve problems
    efficiently and effectively, how to plan steps in solving
    a complex problem, how to improve performance, and so forth.

  Production Rules
    A widely-used knowledge representation in which knowledge
    is formalized into ``rules'' containing an ``IF'' part and
    a ``THEN'' part (also called a condition and an action).
    The knowledge represented by the production rule is applicable
    to a line of reasoning if the IF part of the rule is satisfied:
    consequently, the THEN part can be concluded or its
    problem-solving action taken.

                                        -- Ken Laws
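The IF/THEN structure in the Production Rules definition above amounts to a
forward-chaining loop.  Here is a minimal Python sketch of the idea; the
drilling facts and rule contents are invented (loosely echoing the "oil well
drilling" task domain mentioned earlier) and are not from the Teknowledge/CEI ad:

```python
# Minimal forward-chaining production system: each rule pairs an IF part
# (a set of conditions that must all hold) with a THEN part (a fact
# concluded when the rule fires).  Facts and rules here are invented.
rules = [
    ({"stuck pipe", "high torque"}, "drill bit is worn"),
    ({"drill bit is worn"}, "recommend tripping out"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:              # keep firing until no rule adds anything new
        changed = False
        for condition, action in rules:
            if condition <= facts and action not in facts:
                facts.add(action)       # the THEN part is concluded
                changed = True
    return facts

result = forward_chain({"stuck pipe", "high torque"}, rules)
# result now holds the input facts plus the chained conclusions
```

Note how the second rule fires only because the first one concluded its
condition, which is the "line of reasoning" the definition describes.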

------------------------------

Date: 24 Jul 83 1:41:35-PDT (Sun)
From: decvax!linus!utzoo!utcsrgv!peterr @ Ucb-Vax
Subject: Expectations of expert system technology
Article-I.D.: utcsrgv.1824

From a recent headhunting flyer sent to some AAAI members:

"We have been retained by a major Financial Institution, located in
New York City.  They are interested in building the support staff for
their money market traders and are looking for qualified candidates
for the following positions:

    A Senior AI Researcher who has experience in knowledge rep'n
    and expert systems.  The ideal candidate would have a
    graduate degree in CS - AI with a Psychology (particularly
    cognitive processes), Cultural Anthropology, or comparable
    background.  This person will start by being a consultant in
    Human Factors and would interact between the Traders and the
    Systems they use.  Two new Xerox 1100 computers have been
    purchased and experience in LISP programming is necessary
    (with INTERLISP-D preferred).  This person will have their
    own personal LISP machine.  The goal of this position will
    be to analyze how Traders think and to build trading support
    (expert) systems geared to the individual Trader's style."

Two other job descriptions are given for the same project, for an
economist and an MBA with CS (database, communications, and systems)
and Operations Research background.

The fact that the co. would buy the 1100's without consulting their
future user, and the tone of the description, prompt me to wonder
whether the co. is treating expert system technology as an engineering
discipline which can produce results in relatively short order
rather than the experimental field it appears to be.  Particularly
troubling is the problem domain for this system--I would expect such
traders to make extensive use of knowledge about politics and economic
policy on a number of levels, not easy knowledge to represent.

I'm not an expert systems builder by any means and may be
underestimating the technology...  does anyone think this co. is not
expecting too much?  (Replies to the net, please)

[The company should definitely get copies of

  J.L. Stansfield, COMEX: A Support System for a Commodities Analyst,
  MIT AIM-423, July 1977.

  J.L. Stansfield, Conclusions from the Commodity Expert Project,
  MIT AIM-601, (AD-A097-854), Nov. 1980.

The latter, I hear, documents the author's experience with large,
incomplete databases of unreliable facts about a complex world.
It must be one of the few examples of an academic research project
that could not claim success.  -- KIL]

------------------------------

Date: Mon 25 Jul 83 02:45:51-EDT
From: Chip Maguire <Maguire@COLUMBIA-20.ARPA>
Subject: Re: Portable and More Efficient Lisps

        What I wish to generate is a discussion of which features of
the various Lisps provide a nice/efficient/(other standard virtues)
environment for computer-aided intellectual tasks (such as AI, CAD,
etc.).
        For example, quite a lot of the work that I have been involved
with recently required that, from within a Lisp environment, I
generate line drawings to represent data structures, binding
environments for a multi-processor simulator, or even a graphical
syntax for programming.  Thus, I would like to have 1) reasonable
support (in terms of packages of routines) for textual labels and line
drawings; and 2) this same package available irrespective of which
machine I happen to be using at the time [within the limits of the
hardware available].

        What other examples of common utilities are emerging as
"expected" `primitives'?  Chip

------------------------------

Date: Sat, 23 Jul 83 15:58:24 EDT
From: Stephen Slade <Slade@YALE.ARPA>
Subject: Portable and More Efficient Lisps

Chip Maguire took violent exception to the claim that T, a version of 
Scheme implemented at Yale, is "more efficient and portable" compared
to other Lisp implementations.  He then listed the numerous machines
on which PSL, developed at Utah, now runs.

The problem in this case is one of operator precedence:  "more" has
higher precedence than "and".  Thus, T is both portable AND more
efficient.  These two features are intertwined in the language design
and implementation through the use of lexical scoping and an
optimizing compiler which performs numerous source-to-source
optimizations.  Many of the compiler operations that depend on the
specific target machine are table driven.  For example, the register
allocation scheme clearly depends on the number and type of registers
available.  The actual code generator is certainly machine dependent,
but does not comprise a large portion of the compiler.  The compiler
is written largely in T, simplifying the task of porting the compiler
itself.

For PSL, portability was a major implementation goal.  For T,
portability became a byproduct of the language and compiler design.  A
central goal of T has been to provide a clean, elegant, and efficient
LISP.  The T implementers strove to achieve compatibility not only
among different machines, but also between the interpreted and
compiled code -- often a source of problems in other Lisps.  So far, T
has been implemented for the M68000 (Apollo/Domain), VAX/UNIX, and
VAX/VMS.  There are plans for other machine implementations, as well
as enhancements of the elegance and efficiency of the language itself.

People at Yale have been using T for the past several years now.  
Applications have included an extensible text editor with inductive 
inference capability (editing by example), a hierarchical digital
circuit graphics editor and simulator, and numerous large AI programs.
T is also being used in a great many undergraduate courses both at
Yale and elsewhere.

I believe that PSL and Standard LISP have been very worthwhile
endeavors and have bestowed the salutary light of LISP on many
machines that had theretofore languished in the lispless darkness of
algebraic languages.  T, though virtuous in design and virtual in
implementation, does not address the FORTRAN-heathen, but rather seeks
to uplift the converted and provide comfort to those true-believers
who know, in their heart of hearts, that LISP can embrace both
elegance and efficiency.  Should this credo also facilitate
portability, well, praise the Lord.

------------------------------

Date: Mon, 25 Jul 83 11:41:50 EDT
From: Nathaniel Mishkin <Mishkin@YALE.ARPA>
Subject: Re: Lisp Portability

    Date: Tue 19 Jul 83 15:24:00-EDT
    From: Chip Maguire <Maguire@COLUMBIA-20.ARPA>
    Subject: Lisp Portability

    [...]

    So lets hear more about the good ideas in T and fewer nebulous
    comments like:  "more efficient and portable".

I can give my experience working on a display text editor, U, written
in T. (U's original author is Bob Nix.)  U is 10000+ lines of T code.
Notable U features are a "do what I did" editing by example system, an
"infinite" undo facility, and a Laurel (or Babyl) -like mail system.
U runs well on the Apollo and almost well on VAX/VMS. U runs on
VAX/Unix as well as can be expected for a week's worth of work.
Porting U went well:  the bulk of U did not have to be changed.

- - - - -

Notable features of T:

    - T, like Scheme (from which T is derived) supports closures (procedures
      are first-class data objects).  Closures are implemented efficiently
      enough so that they are used pervasively in the implementation of the
      T system itself.

    - Variables are lexically-scoped; variables from enclosing scopes can
      be accessed from closed procedures.

    - T supports an object-oriented programming style that does not conflict
      with the functional nature of Lisp. Operations (like Smalltalk messages)
      can be treated as functions; e.g. they can be used with the MAP
      functions.

    - Compiled and interpreted T behave identically.

    - T has fully-integrated support for multiple namespaces so software
      written by different people can be combined without worrying about
      name conflicts.

    - The T implementors (Jonathan Rees and Norman Adams) have not felt
      constrained to hold on to some of the less modern aspects of older
      Lisps (e.g. hunks and irrational function names).

    - T is less of a bag of bits than other Lisps. T has a language definition
      and a philosophy.  One feels that one understands all of T after reading
      the manual.  The T implementors have resisted adding arbitrary features
      that do not fit with the philosophy.

    - Other features:  inline procedure expansion, procedures accept arbitrary
      numbers of parameters ("lexpr's" or "&rest-args"), interrupt processing.

All these aspects of T have proved to be very useful.
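For readers unfamiliar with them, the first two items (first-class procedures
and lexical scoping) can be illustrated in any language with closures.  A
rough Python analogue follows; T itself is a Scheme dialect, so this only
approximates the idea, and the counter example is invented:

```python
def make_counter(start):
    # 'count' lives in the lexical scope of make_counter; the returned
    # procedure closes over it and can read and update it later.
    count = [start]          # boxed so the inner procedure can mutate it
    def bump(delta=1):
        count[0] += delta
        return count[0]
    return bump

c1 = make_counter(0)
c2 = make_counter(100)
c1(); c1()    # each closure carries its own captured environment
```

Because each call to make_counter builds a fresh environment, c1 and c2
advance independently, which is what makes closures usable as lightweight
objects throughout a system.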

- - - - -

    The predecessor system "Standard LISP" along with the REDUCE
    symbolic algebra system ran on the following machines (as October
    1979):  Amdahl:  470V/6; CDC: 640, 6600, 7600, Cyber 76; Burroughs:
    B6700, B7700; DEC: PDP-10, DECsystem-10, DECsystem-20; CEMA: ES
    1040; Fujitsu:  FACOM M-190; Hitachi:  HITAC M-160, M-180;
    Honeywell:  66/60; Honeywell-Bull:  1642; IBM: 360/44, 360/67,
    360/75, 360/91, 370/155, 370/158, 370/165, 370/168, 3033, 370/195;
    ITEL: AS-6; Siemens:  4004; Telefunken:  TR 440; and UNIVAC: 1108,
    1110.

Hmm. Was the 370/168 implementation significantly different from the
370/158 implementation?  Also, aren't some of those Japanese machines
"360s".  When listing implementations, let's do it in terms of
architectures and operating systems.

While it may be the case that PSL is more portable than T, T does
presently run on the Apollo, VAX/VMS and VAX/Unix. Implementations for
other architectures are being considered.

------------------------------

End of AIList Digest
********************

∂28-Jul-83  0912	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #27
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Jul 83  09:11:29 PDT
Date: Wednesday, July 27, 1983 4:21PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #27
To: AIList@SRI-AI


AIList Digest           Thursday, 28 Jul 1983      Volume 1 : Issue 27

Today's Topics:
  Multiple producers in a production system
  PROLONG
  HFELISP
  Getting Started in AI
  Lisp Translation
  Re: Expectations of Expert System Technology
  The Fifth Generation Computer Project
  The Military and AI
  AI Koans
  HP Computer Colloquium 7/28
----------------------------------------------------------------------

Date: 26 Jul 1983 0937-PDT
From: Jay <JAY@USC-ECLC>
Subject: Multiple producers in a production system

(speculation/question)

Has anyone heard of multiple "producers" in production systems?  What
I mean is:  if the STM contains (a b c) and there is a rule (a b)
-> (d) and another (b c) -> (e), would it be useful to somehow fire
BOTH productions?  The PS could become two PS's, one with (d c) and
another with (e a) in STM.  This sort of PS could be useful in fuzzy
areas of knowledge where the same implicants could (due to lack of
other implicants, or due to lack of understanding) imply more than one
result.

j'
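One literal reading of the proposal can be sketched in a few lines of Python.
The set-rewriting semantics here, where a firing consumes its implicants and
adds its result, is an assumption inferred from the (a b c) -> (d c) / (e a)
example, not an established PS convention:

```python
# Fork-on-conflict firing: every rule whose left-hand side is contained
# in the STM fires in its own copy of the system, yielding one successor
# STM per applicable production.
def fork_fire(stm, rules):
    return [(stm - lhs) | {rhs} for lhs, rhs in rules if lhs <= stm]

rules = [(frozenset("ab"), "d"), (frozenset("bc"), "e")]
successors = fork_fire(set("abc"), rules)
# two successor STMs result, one per applicable production
```

Running the forked systems amounts to a breadth-first search over the tree of
possible STMs, at the cost of the state count multiplying at each conflict.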

------------------------------

Date: Tue 26 Jul 83 23:14:06-PDT
From: WALLACE <N.WALLACE@SU-SCORE.ARPA>
Subject: PROLONG

        PROLONG:  A VERY SLOW LOGIC PROGRAMMING LANGUAGE

                          ABSTRACT

PROLONG was developed at the University of Heiroglyphia over a 22-year
period.  PROLONG is an implementation of a very well-known technique
for deciding whether a given well-formed formula F of first-order
logic is a theorem.  We first type in the axioms A of our system.
Then PROLONG applies the rules of inference successively to the axioms
A and the subsequent theorems we derive from A.  A matching routine
determines whether F is identical to one of these theorems.  If the
algorithm stops, we know that F is a theorem.  If it never stops, we
know that F is not.

------------------------------

Date: 27 Jul 1983 0942-PDT
From: Jay <JAY@USC-ECLC>
Subject: HFELISP


        HFELISP (Heffer Lisp) HUMAN FACTOR ENGINEERED LISP

                                ABSTRACT

  HFE suggests that the more complicated features of (Common) Lisp are
dangerous, and hard to understand.  As a result a number of Fortran,
Cobol, and 370 assembler programmers got together with a housewife.
They pared Lisp down to what we believe to be a much simpler and more
understandable system.  The system includes only the primitives CONS,
READ, and PRINT.  However, CONS was restricted to take only an atom
for the first argument, and a one-level list for the second.  Since
all lists are one-level, they also did away with parentheses.  All the
primitives were coded in ADA and this new Lisp is being considered as
the DOD's AI language.

j'

------------------------------

Date: 22 Jul 83 15:39:24-PDT (Fri)
From: harpo!floyd!cmcl2!rocky2!flipkin @ Ucb-Vax
Subject: Getting Started in AI
Article-I.D.: rocky2.103

Can someone point me to a good place to begin with AI? I find the
subject fascinating (as does my EECS girlfriend), and I would
appreciate some help getting started. Thanks in advance,
                Dennis Moore

(reply via mail please, unless you think it is of great interest
to the net)

[I think it is of great interest!  I recommend the AI Handbook for a
general overview.  I am still looking for a good intro to Lisp and the
programming conventions needed to produce interesting Lisp programs.
(Winston and Horn is a reasonable introduction, and Charniak,
Riesbeck, and McDermott has a lot of good material.  The Little Lisper
is a good introduction to recursive programming if you can stand the
"programmed text" question-and-answer presentation.)
-- KIL]

------------------------------

Date: 26 Jul 1983 0833-PDT
From: FC01@USC-ECL
Subject: Lisp Translation

        This lisp debate seems to be turning into a free-for-all.
Slanderous remarks are unnecessary. The fact is that once you get used
to something, the momentum of keeping with it is often more powerful
than any advantages attainable by changing from it. Perhaps functions
like transor from Interlisp could be extended by some of the AI
researchers to provide real translations from lisp to lisp. This way,
you could develop your programs in the lisp of your choice and run
them in the most efficient lisp available on any given machine. With
all the work that has been done on human translations and the extreme
complexity thereof, it would seem a practical and only extremely
ambitious (as opposed to downright unrealistic) project to develop a
translator between lisps. Think of it like translating between a New
Yorker and a Bostonian and a Texan, all talking breeds of English. If
the energy spent on developing new lisps and arguing about their
superiorities were spent in the lisp translation area, we might have
it done by now.
                        Fred
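The table-driven translation Fred alludes to (Interlisp's TRANSOR worked
roughly this way) can be sketched as a recursive walk over S-expressions.
The name mapping below is invented for illustration; a real translator would
also have to handle macros, scoping differences, and special forms:

```python
# Toy dialect-to-dialect translator: walk an S-expression (nested
# Python lists standing in for Lisp lists) and rewrite operator names
# via a per-dialect table.  The mapping is hypothetical.
TABLE = {"defun": "de", "null": "not"}   # source-name -> target-name

def translate(sexpr, table):
    if isinstance(sexpr, list):
        return [translate(s, table) for s in sexpr]
    return table.get(sexpr, sexpr)       # atoms not in the table pass through

src = ["defun", "f", ["x"], ["null", ["cdr", "x"]]]
out = translate(src, TABLE)
# -> ["de", "f", ["x"], ["not", ["cdr", "x"]]]
```

Per-atom renaming is the easy ninety percent; the hard residue is exactly the
dialect differences (evaluation order, scoping, fexprs) that the debate is
about.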

------------------------------

Date: 25 Jul 83 18:11:37-PDT (Mon)
From: decvax!microsoft!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Expectations of expert system technology
Article-I.D.: ssc-vax.345

Expert systems technology is an experimental field whose basic
concepts have been fairly well established in the past few years.
Since it is really an engineering field (knowledge engineering) much
of the important research is carried on by attempting to develop a
specific application and seeing what sorts of problems and solutions
crop up.  This is true for MYCIN, R1, PROSPECTOR, and many other
expert systems.  Our Expert Systems Technology group at Boeing has
been developing a prototype flight route planner.  It has provided a
good test bed for more theoretical work on the kinds of tools and
capabilities needed for knowledge engineering (although as a planner,
it may never be fully functional).  Our application is sufficiently
difficult that it is quite experimental; however, a simple expert
system is not particularly difficult to put together if some of the
existing and available tools are used.  Needless to say, many sweeping
generalizations and unjustified assumptions (read: gross hacks) must 
be made, in order to simplify the problem to a point where an expert 
system can be built.  The resulting expert system, although perhaps
not much more capable than a good C program, will be much smaller and 
more transparent in structure than any ordinary program.

The ad in question may or may not be reasonable.  I don't know enough 
about finance to say whether the knowledge in that domain can be 
easily encoded.  However, if the company's expectations are not too
high, they may end up with a reasonable tool, one that will be just as
good as if some C wizard had spent a year of sleepless nights 
reinventing the AI wheels.

Stan ("the Leprechaun Hacker") Shebs
Boeing Aerospace Co.
ssc-vax!sts (soon utah-cs)

------------------------------

Date: 26 Jul 83 10:50:26-PDT (Tue)
From: decvax!linus!utzoo!hcr!ravi @ Ucb-Vax
Subject: The Fifth Generation Computer Project
Article-I.D.: hcr.455

Has anyone out there had any contact with the Japanese Institute for
New Generation Computer Technology (which is running the Fifth
Generation Computer Project)?  Since the first rush of publicity
when the project was initiated, things have been fairly quiet (except
for the somewhat superficial book by Feigenbaum and a few papers in
symposia), and it's a bit hard to find out just how the project is
progressing.  I am especially interested in talking to people who have
visited INGCT recently and have met with the people directly involved
in the project.  Thanks!
        --ravi

        {linus, floyd, allegra, ihnp4} ! utzoo ! hcr ! hcrvax ! ravi 
OR
        decvax ! hcr ! hcrvax ! ravi

------------------------------

Date: Wed, 27 Jul 83 08:42 EDT
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: The military and AI

Food for thought:


  Date: 26 Jul 83 12:05:02 PDT (Tuesday)
  From: McCullough.PA
  Subject: The military and AI
  To: antiwar↑ 

  From "The Race to Build a Supercomputer" in Newsweek, July 4, 1983...

  [Robert Kahn, mentioned below, is DARPA's computer director]


'Once they are in place, these technologies will make possible an 
astonishing new breed of weapons and military hardware.  Smart robot 
weapons--drone aircraft, unmanned submarines and land vehicles--that 
combine artificial intelligence and high-powered computing can be
sent off to do jobs that now involve human risk.  "This is a sexy area
to the military, because you can imagine all kinds of neat,
interesting things you could send off on their own little missions
around the world or even in local combat," says Kahn.  The Pentagon
will also use the technologies to create artificial-intelligence
machines that can be used as battlefield advisers and superintelligent
computers to coordinate complex weapons systems.  An intelligent
missile-guidance system would have to bring together different
technologies--real-time signal processing, numerical calculations and
symbolic processing, all at unimaginably high speeds--in order to make
decisions and give advice to human commanders.'

------------------------------

Date: 24 Jul 1983 16:21-PDT
From: greiner@Diablo
Subject: AI Koans

[This has appeared on several BBoards thanks to Gabriel Robins, Rich
Welty, Drew McDermott, Margot Flowers, and no doubt others.  I have
no idea what it is about, but pass it on for your doubtful
enlightenment.  -- KIL]


AI Koans: (by Danny)

  A novice was trying to fix a broken lisp machine by turning the
power off and on.  Knight, seeing what the student was doing spoke
sternly- "You can not fix a machine by just power-cycling it with no
understanding of what is going wrong."
  Knight turned the machine off and on.
  The machine worked.

-       -       -       -       -

One day a student came to Moon and said, "I understand how to make a
better garbage collector.  We must keep a reference count of the
pointers to each cons." Moon patiently told the student the following
story-

  "One day a student came to Moon and said, "I understand how to
  make a better garbage collector...


-       -       -       -       -

  In the days when Sussman was a novice Minsky once came to him as he
sat hacking at the PDP-6.  "What are you doing?", asked Minsky.
  "I am training a randomly wired neural net to play Tic-Tac-Toe."
  "Why is the net wired randomly?", asked Minsky.
  "I do not want it to have any preconceptions of how to play"
  Minsky shut his eyes.
  "Why do you close your eyes?", Sussman asked his teacher.
  "So the room will be empty."
  At that moment, Sussman was enlightened.


-       -       -       -       -

A student, in hopes of understanding the Lambda-nature, came to
Greenblatt.  As they spoke a Multics system hacker walked by.  "Is it
true", asked the student, "that PL-1 has many of the same data types
as Lisp".  Almost before the student had finished his question,
Greenblatt shouted, "FOO!", and hit the student with a stick.


-       -       -       -       -

A disciple of another sect once came to Drescher as he was eating his
morning meal.  "I would like to give you this personality test", said
the outsider,"because I want you to be happy." Drescher took the
paper that was offered him and put it into the toaster- "I wish the
toaster to be happy too".


-       -       -       -       -
(by who?)

A man from AI walked across the mountains to SAIL to see the Master,
Knuth.  When he arrived, the Master was nowhere to be found.

        "Where is the wise one named Knuth?" he asked a passing
student.

        "Ah," said the student, "you have not heard. He has gone on a
pilgrimage across the mountains to the temple of AI to seek out new
disciples."

Hearing this, the man was Enlightened.

-       -       -       -       -


And, of course, my own contribution:


A famous Lisp Hacker noticed an Undergraduate sitting in front of a
Xerox 1108, trying to edit a complex Klone network via a browser.
Wanting to help, the Hacker clicked one of the nodes in the network
with the mouse, and asked "what do you see?"
Very earnestly, the Undergraduate replied, "I see a cursor."
The Hacker then quickly pressed the boot toggle at the back of the
keyboard, while simultaneously hitting the Undergraduate over the
head with a thick Interlisp Manual.  The Undergraduate was then
Enlightened.


         - Gabriel [Robins@ISIF]

------------------------------

Date: 26 Jul 83 14:10:41 PDT (Tuesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 7/28


                Professor Gio Wiederhold
                Department of Computer Science
                Stanford University

                Knowledge in Databases


We define knowledge-based approaches to database problems.

Using a clarification of application levels from the enterprise to the
system levels, we give examples of the varieties of knowledge which
can be used.  Most of the examples are drawn from work at the KBMS
project at Stanford.

The object of the presentation is to illustrate the power, and also
the high payoff of quite straightforward artificial intelligence 
applications in databases.  Implementation choices will also be 
evaluated.


        Thursday, July 28, 1983 4:00 pm

        5M Conference room
        HP Stanford Park Labs
        1501 Page Mill Rd
        Palo Alto

        *** Be sure to arrive at the building's lobby ON TIME, so that
you may be escorted to the conference room.

------------------------------

End of AIList Digest
********************

∂29-Jul-83  1004	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #28
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Jul 83  10:04:18 PDT
Date: Friday, July 29, 1983 9:12AM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #28
To: AIList@SRI-AI


AIList Digest            Friday, 29 Jul 1983       Volume 1 : Issue 28

Today's Topics:
  USENET and AI
  AI and the Military
  The Fifth Generation Computer Project
  Lisp Books, Nondeterminism, Japanese Effort
  Automated LISP Dialect Translation
  Data Flow Computers and PS's
  Repeated Substring Detection
  A.I. in Sci Fi (2)
----------------------------------------------------------------------

Date: 26 Jul 83 11:52:01-PDT (Tue)
From: teklabs!jima @ Ucb-Vax
Subject: USENET and AI
Article-I.D.: teklabs.2247

In response to [a Usenet] query about AI research going on at USENET
sites:

The Tektronix Computer Research Lab now has a Knowledge-Based Systems 
group. We are a <very> new group and are still staffing up.  We're 
looking into circuit trouble shooting as well as numerous other topics
of interest.

Jim Alexander
Usenet: {ucbvax,decvax,pur-ee,ihnss,chico}!teklabs!jima
CSnet:  jima@tek
ARPA:   jima.tek@rand-relay

------------------------------

Date: Wed 27 Jul 83 21:29:44-PDT
From: Ira Kalet <IRA@WASHINGTON.ARPA>
Subject: AI and the military

The possibilities of AI in unmanned weapons systems are wonderful!  
Now we could send all the weapons, and their delivery vehicles to the 
moon (or beyond) where they can fight our war for us without anyone 
getting hurt and no property damage.  That would be progress!  If only
the decision makers valued us humans more than their toys..........

------------------------------

Date: 27 Jul 83 18:38:58 PDT (Wednesday)
From: Hamilton.ES@PARC-MAXC.ARPA
Subject: Re: The Fifth Generation Computer Project

In case some of you are not on every junk mailing list known to man
the way I am, there is a new international English-language journal
with an all-Japanese editorial board called "New Generation
Computing", published by Springer-Verlag, Journal Fulfillment Dept.,
44 Hartz Way, Secaucus, NJ 07094.  The price is even more outrageous
than the stuff published by North Holland:  vol.1 (2 issues) 1983,
$52; vol.2 (4 issues) 1984, $104.

Can anybody explain why so much AI literature (even by US authors) is 
published by foreign publishers at outrageous prices?  I should have 
thought some US univerity press would get smart and get into the act
in a bigger way.  Lawrence Erlbaum seems to be doing a creditable job
in Cognitive Science, but that's just one corner of AI.

--Bruce

------------------------------

Date: 29 Jul 1983 0838-PDT
From: FC01@USC-ECL
Subject: Re: Lisp Books, Nondeterminism, Japanese Effort

Lots of things to talk about today.  A good Lisp book for the beginner:
The LISP 1.6 Primer. It really explains what's going down, and even
has exercises with answers. It is not specific to any particular lisp
of today (since it is quite old) and therefore gives the general
knowledge necessary to use any lisp (with a little help from the
manual).

Nondeterministic production systems: Lots of work has been done. The 
fact is that a production system is built under the assumption that 
there is a single global database. The tree version of a production 
system doesn't meet this requirement. On the other hand, there are 
many models of what you speak of.  The Petri-net model treats such 
things nondeterministically by selecting one or the other (assuming 
their results prevent each other from occurring) seemingly at random.
Of course, unless you have a real parallel processor the results you 
get will be deterministic. I refer you to any good book on Petri-nets 
(Peterson is pretty good). Tree-structured algorithms in general have
this property; therefore any breadth-first search will try to do both
forks of the tree at once. Other examples of theorem provers doing
this are relatively common (not to mention most multiprocess operating
systems based on forks).
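That both-forks-at-once behavior is easy to sketch.  Below is a minimal
illustration (the `rules` table and state names are invented for the
example, not drawn from any particular production system): a queue-based
breadth-first search pursues every branch of a nondeterministic choice
rather than committing to one.

```python
from collections import deque

def breadth_first_outcomes(state, successors):
    """Explore every nondeterministic branch, level by level.

    `successors` maps a state to its possible next states; a state
    with no successors is a final outcome.  Because the frontier is
    a queue, both forks of every choice point are pursued together
    instead of one being selected at random.
    """
    outcomes = []
    frontier = deque([state])
    while frontier:
        s = frontier.popleft()
        nexts = successors(s)
        if not nexts:
            outcomes.append(s)
        else:
            frontier.extend(nexts)
    return outcomes

# Toy example: two rules race to rewrite the same token.
rules = {"start": ["a", "b"], "a": ["a-done"], "b": ["b-done"]}
outs = breadth_first_outcomes("start", lambda s: rules.get(s, []))
# Both outcomes are found: {"a-done", "b-done"}.
```

On a single processor this enumeration is, as Fred notes, perfectly
deterministic; only the interpretation is nondeterministic.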

%th generation computers: There is a lot of work on the same basic
idea as 5th generation computers (a 5th generation computer by any
other name sounds better). From what I have been able to gather from
reading all the info from ICOT (the Japanese project directorate) they
are trying to do the project by getting foreign experts to come and
tell them how. They announce their project, say they're going to lead
the world, and wait for the egos of other scientists to bring them
there to show them how to really do it. The papers I've read show a
few good researchers with really good ideas but little in the way of
knowing how to get them working. On the other hand, data flow, speech
understanding, systolic arrays, microcomputer interfaces to
'supercomputers' and high BW communications are all operational to
some degree in the US, and are being improved on a daily basis. I
would therefore say that unless we show them how, we will be the
leaders in this field, not they.

***The last article was strictly my opinion-- no reflection on anyone
else***

                        Fred

------------------------------

Date: Thu, 28 Jul 83 11:34:17 CDT
From: Paul.Milazzo <milazzo.rice@Rand-Relay>
Subject: Automated LISP Dialect Translation

When Rice University got its first VAX, a friend of mine and I set 
about porting a production system based game playing program to Franz 
Lisp from Cambridge Lisp running on an IBM 370.  We used, as I recall,
a combination of Emacs macros (to change lexical constructs) and a
LISP program (to translate program constructs).  The technique was not
an elegant one, nor was it particularly general, but it gives me good 
reason to think that the LISP translator Fred proposes is far from 
impossible.  It also points out that implementation superiority is not
the only reason for choosing one LISP over another.
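The lexical half of such a translation amounts to symbol mapping over
parsed s-expressions.  Here is a toy sketch of the idea (the rename
table is hypothetical and the parser minimal; this is not the actual
Rice tool):

```python
# Dialect-to-dialect Lisp translation by symbol mapping, in the
# spirit of the Emacs-macro/LISP-program combination described above.
RENAMES = {"greaterp": ">", "plus": "+"}  # hypothetical dialect table

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Build a nested list from a token stream (consumes tokens)."""
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)  # discard ")"
        return lst
    return tok

def translate(expr):
    """Recursively rename symbols according to the dialect table."""
    if isinstance(expr, list):
        return [translate(e) for e in expr]
    return RENAMES.get(expr, expr)

def unparse(expr):
    if isinstance(expr, list):
        return "(" + " ".join(unparse(e) for e in expr) + ")"
    return expr

out = unparse(translate(parse(tokenize("(greaterp (plus x 1) y)"))))
# out == "(> (+ x 1) y)"
```

Translating program constructs (macros, special forms with different
semantics) is the genuinely hard part, which is why the real effort
needed a LISP program and not just lexical rewriting.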

                                Paul Milazzo <milazzo.rice@Rand-Relay>
                                Dept. of Mathematical Sciences
                                Rice University, Houston, TX

:-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)
P.S.  Fred:  After living in Texas for eight years, I'm still not
      sure I could interpret a Texan's remarks for a New Yorker.
      The dialect is easy to understand, but the concepts are all
      different...
:-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)

------------------------------

Date: 28 Jul 1983 1352-PDT
From: Jay <JAY@USC-ECLC>
Subject: data flow computers and PS's

(more speculation)

 There has been some development of computers suited to certain
high-level languages, including the LISP machines.  There has also
been some research into non-von Neumann machines.  One such machine
is the Data Flow Machine.

  The data flow machine differs from the conventional computer in that
ALL instructions are initiated when the program starts.  Each
instruction waits for the calculations yielding its arguments to
finish before it executes.

  This machine seems, to me, to be ideally suited to Production
Systems/Expert Systems.  Each rule would be represented as a few
instructions (the IF part of the production) and the THEN part would
be represented by the completion of the rule.  For example, the rule
(Month-is-june AND Sun-is-up) -> (Temperature-is-high) would be coded
as:

Temperature-is-high:    AND
                       /   \
                     /       \
                   /           \
          (Month-is-june)   (Sun-is-up)

  Where (Month-is-june) and (Sun-is-up) are represented as either
other rules, or as data (which I assume completes instantly).
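A rough software analogue of that firing discipline: assume a rule's
"instruction" completes as soon as all of its argument facts have
arrived, and iterate to a fixpoint.  The rule names are taken from the
example above; the scheme itself is only a sketch of the data flow
idea, not a proposed machine design.

```python
def dataflow_fire(rules, facts):
    """Fire rules dataflow-style: a rule completes as soon as all
    of the facts feeding it are available.  `rules` maps each
    conclusion to the set of antecedent facts it requires."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conclusion, needs in rules.items():
            if conclusion not in facts and needs <= facts:
                facts.add(conclusion)   # the THEN part "completes"
                changed = True
    return facts

rules = {"Temperature-is-high": {"Month-is-june", "Sun-is-up"}}
result = dataflow_fire(rules, {"Month-is-june", "Sun-is-up"})
# "Temperature-is-high" is derived once both inputs have arrived.
```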

j'

------------------------------

Date: Thu 28 Jul 83 16:06:46-PDT
From: David E.T. Foulser <FOULSER@SU-SCORE.ARPA>
Subject: Repeated Substring Detection

Would anyone in AI have use for the following type of program?  Given
a k-dimensional (the lower k the better) input string of characters 
from a finite alphabet, the program finds all substrings of dimension
k (or less if necessary) that occur more than once in the input
string.  I don't have a program that does this, but would like to know
of any interest.
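For the one-dimensional case, the problem can be stated concretely with
a naive sketch like the following (quadratic in string length; a suffix
tree would do the same job in linear time, and the multidimensional
generalization is where it gets interesting):

```python
from collections import defaultdict

def repeated_substrings(s, min_len=2):
    """Find every substring of length >= min_len that occurs more
    than once in s -- the 1-D case of the problem described above.
    Deliberately naive: count all substrings, keep the repeats."""
    seen = defaultdict(int)
    n = len(s)
    for i in range(n):
        for j in range(i + min_len, n + 1):
            seen[s[i:j]] += 1
    return {sub for sub, count in seen.items() if count > 1}

reps = repeated_substrings("abcabc")
# reps == {"ab", "bc", "abc"}
```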

                                        Sincerely,
                                        Dave Foulser

------------------------------

Date: 27 Jul 1983 1617-PDT
From: Park
Subject: A.I. in Sci Fi

                  [Reprinted from the SRI BBoard.]

Do you have a favorite gripe about the way scientists, computers, 
robots, or artificial intelligence are portrayed on tv shows?  Send 
them to me and I will forward them on Monday August 1 to an 
honest-to-God tv-show writer who is going to write that kind of show 
soon and would like to do it right.

Bill Park, EJ239 SRI International 333 Ravenswood Avenue Menlo Park,
CA 94025

------------------------------

Date: Thu 28 Jul 83 12:24:12-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Re: A.I. in Sci Fi

Gripes?  You mean things like:

  Hawaii 5-0 always using the card sorter as the epitome
  of computer readout?

  Stepford Wives portraying androids so realistic that no one
  notices, and executives/scientists who prefer them to true
  companions?

  Demon Seed showing impregnation of a woman by a computer?

  Telefon slowing down CRT typeout to 150 baud and adding
  Teletype sound effects?

  War Games similarly slowing the CRT typeout; using
  natural language communication; using voice synthesis
  on a home terminal connected by modem to a military computer;
  postulating that our national defense is in the hands of
  unsecured computers with dial-up ports, faulty password
  systems, games directories, and big panels of flashing lights;
  and portraying scientists and generals as nerds?

  Star Wars suggesting that computerized targeting mechanisms
  will always be inferior to human reflexes?

  Tron's premise that a computer can suck you into its internal
  conceptual world?

  Star Trek and War Games preaching that any computer can be
  disabled, even melted, by a logical contradiction or an
  unsatisfiable task?

Nah, I don't mind.

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

∂29-Jul-83  1911	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #29
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Jul 83  19:11:05 PDT
Date: Friday, July 29, 1983 4:27PM
From: AIList (Kenneth Laws, Moderator) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #29
To: AIList@SRI-AI


AIList Digest           Saturday, 30 Jul 1983      Volume 1 : Issue 29

Today's Topics:
  Robustness stories, program logs wanted; reprise
  Job Ad: Research Fellowships at Edinburgh AI
  Job Ad: Research Associate/Programmer at Edinburgh AI
  Job Ad: Computing Officer at Edinburgh AI
----------------------------------------------------------------------

Date: 28 Jul 83 1631 EDT (Thursday)
From: Craig.Everhart@CMU-CS-A
Reply-to: Robustness@CMU-CS-A
Subject: Robustness stories, program logs wanted; reprise

Response to the blinded Robustness mailbox has been good, but not
quite good enough to do the trick.  If you have a robustness-related
story or a change log for a program, wouldn't you consider sending it
to my collection?  Thanks very much!

What I need is descriptions of robustness features--designs or fixes
that have made programs meet their users' expectations better, beyond
bug fixing.  E.g.:
        - An automatic error recovery routine is a robustness feature,
          since the user (or client) doesn't then have to recover by
          hand.
        - A command language that requires typing more for a dangerous
          command, or supports undoing, is more robust than one that
          has neither feature, since each makes it harder for the user
          to get in trouble.
There are many more possibilities.  Anything where a system
doesn't meet user expectations because of incomplete or ill-advised
design is fair game.

Your stories will be used to validate my PhD thesis at CMU, which is
an attempt to build a discrimination net that will aid system
designers and maintainers in improving their designs and programs.
All stories will be properly credited in the thesis.

Please send a description of the problem, including an idea of the
task and what was going wrong (or what might have gone wrong) and a
description of the design or fix that handled the problem.  Or, if you
know of a program change log and would be available to answer a
question or two on it, please send it.  I'll extract the reports from
it.

Please send stories and logs to Robustness@CMU-CS-A.  Send queries
about the whole process to Everhart@CMU-CS-A.  I appreciate it--thank
you!

------------------------------

Date: Wednesday, 27-Jul-83  17:34:36-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Job Ad: Research Fellowships at Edinburgh AI

--------

                            University of Edinburgh
                     Department of Artificial Intelligence

                              2 RESEARCH FELLOWS

                               (readvertisement)

Applications are invited for two Research Fellow posts to join a
project, funded by the Science and Engineering Research Council, which
is concerned with developing methods of modelling the user of
knowledge-based training and aid systems.  Candidates, who should have
a higher degree in Computer Science, Mathematics, Experimental
Psychology or related discipline, should be experienced programmers
and familiar with UNIX.  Experience of PROLOG or LISP and some
knowledge of IKBS (Intelligent Knowledge Based Systems) techniques 
would be an advantage.

The posts are tenable for three years, starting 1 October 1983, on the
salary scale 7190 - 11160 pounds sterling.

Applications, which should include a curriculum vitae and the names of
two referees, should be sent to

            The Secretary's Office
            Old College
            South Bridge
            Edinburgh EH8 9YL
            Great Britain

(from whom further particulars can be obtained) by 27 August 1983.
Please quote reference 5106.

------------------------------

Date: Wednesday, 27-Jul-83  17:38:47-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Job Ad: Research Associate/Programmer at Edinburgh AI

--------

                            University of Edinburgh
                     Department of Artificial Intelligence

                         RESEARCH ASSOCIATE/PROGRAMMER

Applications are invited for a post of Research Associate/Programmer,
to join a project, led by Dr Jim Howe and funded by the Science and
Engineering Research Council, which is concerned with the
interpretation of sonar data in a 3-D marine environment.  Candidates,
who should have a degree in Computer Science, Mathematics or related
discipline, should be conversant with the UNIX programming environment
and fluent in the C language.  The work involves programming
applications of statistical estimation, 3-D motion representation, and
rule-based inference; experience in one or more of these areas would
be an advantage.

The post is tenable for three years, starting 1 October 1983, on the
salary scale 6310 - 7190 pounds sterling.

Applications, which should include a curriculum vitae and the names of
two referees, should be sent to

            The Secretary's Office
            Old College
            South Bridge
            Edinburgh EH8 9YL
            Great Britain

(from whom further particulars can be obtained) by 27 August 1983.
Please quote reference 5107.

------------------------------

Date: Wednesday, 27-Jul-83  17:32:58-BST
From: DAVE     FHL (on ERCC DEC-10)  <bowen@edxa>
Reply-to: bowen%edxa%ucl-cs@isid
Subject: Job Ad: Computing Officer at Edinburgh AI

--------

                            University of Edinburgh
                     Department of Artificial Intelligence

                        DEPARTMENTAL COMPUTING OFFICER

Applications are invited for a post of Departmental Computing Officer.
The successful applicant will lead a small group which is responsible
for creating, maintaining and documenting systems and application
software as needed for research and teaching in Artificial
Intelligence, and for managing the department's computing systems
which run under Berkeley UNIX.  Candidates, who should have a degree
in Computer Science or related discipline, should be conversant with
UNIX and fluent in the C language.  A background in compiler design
or an interest in A.I. would be advantageous.

The post is salaried on the scale 7190 - 11615 pounds sterling, with
placement according to age and experience.

Applications, which should include a curriculum vitae and the names of
two referees, should be sent to

            The Secretary's Office
            Old College
            South Bridge
            Edinburgh EH8 9YL
            Great Britain

(from whom further particulars can be obtained) by 27 August 1983.
Please quote reference 7033.

------------------------------

End of AIList Digest
********************

∂02-Aug-83  1514	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #30
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Aug 83  15:12:22 PDT
Date: Tuesday, August 2, 1983 12:54PM
From: AIList (Moderator: Kenneth Laws) <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #30
To: AIList@SRI-AI


AIList Digest            Tuesday, 2 Aug 1983       Volume 1 : Issue 30

Today's Topics:
  Automatic Translation - Lisp to Lisp,
  Language Understanding - EPISTLE System,
  Programming Aids - High-Level Debuggers,
  Databases - Request for Geographic Descriptors,
  Seminars - Chess & Evidential Reasoning
----------------------------------------------------------------------

Date: Fri 29 Jul 83 15:53:59-PDT
From: Michael Walker <WALKER@SUMEX-AIM.ARPA>
Subject: Lisp Translators

[...]

        There has been some discussion about Lisp translation programs
the last couple of days. Another one to add to the list is that
developed by Gord Novak at Sumex for translating Interlisp into Franz,
Maclisp, UCILisp, and Portable Standard Lisp. I suspect Gord would 
have a pretty good idea about what else is available, as this seems to
be an area of interest of his.

                                Mike Walker

[Another resource might be the set of macros that Rodney Brooks 
developed to run his Maclisp ACRONYM system under Franz Lisp.
The Image Understanding Testbed at SRI uses this package.
-- KIL]

------------------------------

Date: 30 Jul 1983 07:10-PDT
From: the tty of Geoffrey S. Goodfellow
Reply-to: Geoff@SRI-CSL
Subject: IBM Epistle.


    TECHNOLOGY MEMO
    By Dan Rosenheim
    (c) 1983 Chicago Sun-Times (Independent Press Service)
    IBM is experimenting with an artificial intelligence program that 
may lead to machine recognition of social class, according to a 
research report from International Resource Development.
    According to the market research firm, the IBM program can
evaluate the style of a letter, document or memo and can criticize the
writing style, syntax and construction.
    The program is called EPISTLE (Evaluation, Preparation and 
Interpretation System for Text and Language Entities).
    Although IBM's immediate application for this technology is to 
highlight ''inappropriate style'' in documents being prepared by 
managers, IRD researchers see the program being applied to determine 
social origins, politeness and even general character.
     Like Bernard Shaw's Professor Higgins, the system will detect
small nuances of expression and relate them to the social background
of the originator, ultimately determining sex, age, level of
intelligence, assertiveness and refinement.
    Particularly intriguing is the possibility that the IBM EPISTLE 
program will permit a response in the mode appropriate to the user and
the occasion. For example, says IRD, having ascertained that a letter
had been sent by a 55-year-old woman of Armenian background, the
program could help a manager couch a response in terms to which the
woman would relate.

------------------------------

Date: 01 Aug 83  1203 PDT
From: Jim Davidson <JED@SU-AI>
Subject: EPISTLE


There's a lot of exaggeration here, presumably by the author of the
Sun-Times article.  EPISTLE is a legitimate project being worked on
at Yorktown, by George Heidorn, Karen Jensen, and others.  [See,
e.g., "The EPISTLE text-critiquing system". Heidorn et al, IBM
Systems Journal, 1982] Its general domain, as indicated, is business
correspondence.  Its stated (long-term) goals are

    (a) to provide support for the authors of business letters--
        critiquing grammar and style, etc.;

    (b) to deal with incoming texts: "synopsizing letter contents,
        highlighting portions known to be of interest, and
        automatically generating index terms based on conceptual
        or thematic characteristics rather than key words".

Note that part (b) is stated considerably less ambitiously than in
the Sun-Times article.

The current (as of 1982) version of the system doesn't approach even
these more modest goals.  It works only on problems in class (a)--
critiquing drafts of business letters.  The *only* things it checks
for are grammar (number agreement, pronoun agreement, etc.), and
style (overly complex sentences, inappropriate vocabulary, etc.)
Even within these areas, it's still very much an experimental system,
and has a long way to go.

Note in particular that the concept of "style" is far short of the
sort of thing presented in the Sun-Times article.  The kind of style
checking they're dealing with is the sort of thing you find in a
style manual: passive vs. active voice, too many dependent clauses,
etc.
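To make the scope concrete, a number-agreement check of the kind
described reduces to pattern matching over a lexicon.  The following is
a toy illustration only (the lexicon and sentence pattern are invented;
it is in no way IBM's implementation):

```python
# Toy subject-verb number agreement check for sentences of the
# form "The <noun> <verb> ...".  Hypothetical lexicon.
SINGULAR_NOUNS = {"letter", "manager", "memo"}
PLURAL_NOUNS = {"letters", "managers", "memos"}

def number_agreement_ok(sentence):
    words = sentence.lower().rstrip(".").split()
    # Assumes at least "the <noun> <verb>"; verbs ending in "s"
    # are treated as singular forms ("arrives" vs. "arrive").
    noun, verb = words[1], words[2]
    if noun in SINGULAR_NOUNS:
        return verb.endswith("s")
    if noun in PLURAL_NOUNS:
        return not verb.endswith("s")
    return True  # unknown noun: no opinion

ok = number_agreement_ok("The letters arrives today.")  # flagged
```

The real system parses full sentences with a broad-coverage grammar;
the point here is only that "grammar checking" means mechanical checks
of this flavor, not the social-class divination of the news story.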

------------------------------

Date: 28 Jul 1983 05:25:43-PST
From: whm.arizona@Rand-Relay
Subject: Debugger Query--Summary of Replies

                    [Reprinted from Human-Nets.]

Several weeks ago I posted a query for information on debuggers.  The 
information I received fell into two categories: information about 
papers, and information about actual programs.  The information about 
papers was basically subsumed by two documents: an annotated 
bibliography, and soon-to-be-published conference proceedings.  The 
information about programs was quite diverse and somewhat lengthy.  In
order to avoid clogging the digest, only the information about the 
papers is included here.  A longer version of this message will be 
posted to net.lang on USENET.

The basic gold mine of current ideas on debugging is the Proceedings 
of the ACM SIGSOFT/SIGPLAN Symposium on High-Level Debugging which was
held in March, 1983.  Informed sources say that it is scheduled to 
appear as vol. 8, no. 4 (1983 August) of SIGSOFT's Software 
Engineering Notes and as vol. 18, no. 8 (1983 August) of SIGPLAN 
Notices.  All members of SIGSOFT and SIGPLAN should receive copies 
sometime in August.

Mark Johnson at HP has put together a pair of documents on debugging.
They are:

        "An Annotated Software Debugging Bibliography"
        "A Software Debugging Glossary"

I believe that a non-annotated version of this bibliography appeared 
in SIGPLAN in February 1982.  The annotated bibliography is the basic 
gold mine of "pointers" about debugging.

Mark can be contacted at:

        Mark Scott Johnson
        Hewlett-Packard Laboratories
        1501 Page Mill Road, 3U24
        Palo Alto, CA 94304
        415/857-8719

        Arpa:  Johnson.HP-Labs@RAND-RELAY
        USENET: ...!ucbvax!hplabs!johnson


Two books were mentioned that are not currently included in Mark's 
bibliography:

        "Algorithmic Debugging" by Ehud Shapiro.  It has information
          on source-level debugging, debuggers in the language being
          debugged, debuggers for unconventional languages, etc.  It
          is supposedly available from MIT Press.  (From
          dixon.pa@parc-maxc)

        "Smalltalk-80: The Interactive Programming Environment"
           A section of the book describes the system's interactive
           debugger.  (This book is supposedly due in bookstores
           on or around the middle of October.  A much earlier
           version of the debugger was briefly described in the
           August 1981 BYTE.)  (From Pavel@Cornel.)

Ken Laws (Laws@sri-iu) sent me an extract from "A Bibliography of 
Automatic Programming" which contained a number of references on 
topics such as programmer's apprentices, program understanding, 
programming by example, etc.


Many thanks to those who took the time to reply.

                                Bill Mitchell
                                The University of Arizona
                                whm.arizona@rand-relay
                                arizona!whm

------------------------------

Date: Fri 29 Jul 83 19:32:39-PDT
From: Robert Amsler <AMSLER@SRI-AI.ARPA>
Subject: WANTED: Geographic Information Data Bases

I want to build a geographic knowledge base and wonder if
someone out there has small or large sets of foreign
geographic data. Something containing elements such as
(PARIS CITY FRANCE) composed of three items,
Geographic-Name, Superclass, and Containing-Geographic item.

I have already acquired a list of all U.S. cities and
their state memberships; but apart from that need other
geographic information for other U.S. features (e.g. counties,
rivers, mountains, etc.) as well as world-wide data.

I am not especially looking for numeric data (e.g. Longitude
and Latitude; elevations, etc.) nor numeric attributes such
as population, area, etc. -- I want symbolic data, names of
geographic entities.

Note::: I do mean already machine-readable.
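A minimal sketch of the kind of symbolic knowledge base such triples
would support (the triples below are illustrative samples, not an
actual data set on offer):

```python
# (Geographic-Name, Superclass, Containing-Geographic-Item) triples,
# as in the (PARIS CITY FRANCE) example above.
triples = [
    ("PARIS", "CITY", "FRANCE"),
    ("FRANCE", "COUNTRY", "EUROPE"),
    ("HOUSTON", "CITY", "TEXAS"),
    ("TEXAS", "STATE", "USA"),
]

def contained_in(name, container):
    """True if `name` lies (transitively) inside `container`."""
    for n, _cls, parent in triples:
        if n == name:
            return parent == container or contained_in(parent, container)
    return False

inside = contained_in("HOUSTON", "USA")   # True, via TEXAS
```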

Bob Amsler
Natural-Language and Knowledge-Resource Systems Group
Advanced Computer Systems Department
SRI International
333 Ravenswood Ave
Menlo Park, CA 94025

------------------------------

Date: 1 August 1983 1507-EDT
From: Dorothy Josephson at CMU-CS-A
Subject: CMU Seminar, 8/9

                  [Reprinted from the CMU BBoard.]

DATE:           Tuesday, August 9, 1983
TIME:           3:30 P.M.
PLACE:          Wean Hall 5409
SPEAKER:        Hans Berliner
TOPIC:          "Ken Thompson's New Chess Theorem"

                        ABSTRACT

Among the not-quite-so-basic endgames in chess is the one of 2
Bishops versus Knight (no pawns).  What the value of a general
position in this domain is, has always been an open question.  The
Bishops have a large advantage, but it was thought that a basic and
usually achievable position could be drawn.  Thompson has just shown
that this endgame is won in the general case using a technique called
retrograde enumeration.  We will explain what he did, how he did it,
and the significance of this result.  We hope some people from Formal
Foundations will attend as there are interesting questions relating
to whether a construction such as this should be considered a
"proof."

------------------------------

Date: 1 Aug 83 17:40:48 PDT (Monday)
From: murage.pa@PARC-MAXC.ARPA
Subject: HP Computer Colloquium, 8/4

                  [Reprinted from the SRI BBoard.]


                       JOHN D. LOWRANCE

                   Artificial Intelligence Center
                       SRI International


                       EVIDENTIAL REASONING:
           AN IMPLEMENTATION FOR MULTI-SENSOR INTEGRATION


One common feature of most knowledge-based expert systems is that
they must reason based upon evidential information. Yet there is very
little agreement on how this should be done. Here we present our
current understanding of this problem and its solution as it applies
to multi-sensor integration. We begin by characterizing evidence as a
body of information that is uncertain, incomplete, and sometimes
inaccurate. Based on this characterization, we conclude that
evidential reasoning requires both a method for pooling multiple
bodies of evidence to arrive at a consensus opinion and some means of
drawing the appropriate conclusions from that opinion. We contrast
our approach, based on a relatively new mathematical theory of
evidence, with those approaches based on Bayesian probability models.
We believe that our approach has some significant advantages,
particularly its ability to represent and reason from bounded
ignorance. Further, we describe how these techniques are implemented
by way of a long term memory and a short term memory.  This provides
for automated reasoning from evidential information at multiple
levels of abstraction over time and space.
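The "relatively new mathematical theory of evidence" is the
Dempster-Shafer theory, and its pooling step is Dempster's rule of
combination.  A minimal sketch of that rule follows (the sensor masses
are invented examples; this is the textbook rule, not SRI's system):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Pool two bodies of evidence with Dempster's rule.

    m1 and m2 map frozensets (subsets of the frame of discernment)
    to basic probability masses.  Mass assigned to incompatible
    pairs is conflict, and is normalized away."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    norm = 1.0 - conflict  # assumes the bodies are not totally conflicting
    return {s: w / norm for s, w in combined.items()}

# Two sensors over the frame {plane, ship}.  Mass left on the whole
# frame is uncommitted belief -- the "bounded ignorance" above.
frame = frozenset({"plane", "ship"})
m1 = {frozenset({"plane"}): 0.6, frame: 0.4}
m2 = {frozenset({"plane"}): 0.5, frame: 0.5}
pooled = dempster_combine(m1, m2)
# pooled: {plane}: 0.8, whole frame: 0.2
```

Note how, unlike a Bayesian prior, the combined result still reserves
mass for the whole frame rather than forcing it onto the singletons.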


   Thursday, August 4, 1983 4:00 p.m.

   5M Conference Room
   1501 Page Mill Road
   Palo Alto, CA 94304

   NON-HP EMPLOYEES:  Welcome! Please come to the lobby on time, so
that you may be escorted to the conference room.

------------------------------

End of AIList Digest
********************

∂02-Aug-83  2352	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #31
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Aug 83  23:50:57 PDT
Date: Tuesday, August 2, 1983 10:49PM
From: AIList Moderator: Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #31
To: AIList@SRI-AI


AIList Digest           Wednesday, 3 Aug 1983      Volume 1 : Issue 31

Today's Topics:
  Fifth Generation - Opinion & Book Review
----------------------------------------------------------------------

Date: Sat 30 Jul 83 21:39:16-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: 5th generation

I think that there is a widespread misconception about ICOT and the
5th generation project. Here are my comments on a recent message to
this bulletin board:

    From what I have been able to gather from reading all the
    info from ICOT (the Japanese project directorate) they are
    trying to do the project by getting foreign experts to come
    and tell them how. They announce their project, say they're
    going to lead the world, and wait for the egos of other
    scientists to bring them there to show them how to really do
    it.

I personally know several people who have visited ICOT; I have talked
at length with two of them and read trip reports by others.  In their
visits, there was very little if any suggestion that they should
participate in the day-to-day effort at ICOT or give detailed reports
on their work.  The character of the visits was very much that of an
academic visit, where the visitor goes on doing his current work and
sees what the hosts are up to.  The hosts were also very open with
their (very concrete and already under way) plans.  The image of the
ICOT worker waiting anxiously to be told what to do seems the opposite
of reality; in fact they sometimes seem too busy with their own work
to give their visitors any more than the minimum courteous attention.
As far as I can tell, the goal of the invitations is to foster
goodwill and understanding of ICOT's goals.

    The papers I've read show a few good researchers with real
    good ideas but little in the way of knowing how to get them
    working.

ICOT has a very clear plan of creating a line of successively faster 
and more sophisticated "inference machines".  The first, the personal 
sequential inference machine (PSI), a specialized Prolog machine, is
being built now, and there is no reason to believe that it will not be
completed in time. They are also doing research in parallel 
architectures and database machines.

    On the other hand, data flow, speech understanding, systolic
    arrays, microcomputer interfaces to 'supercomputers' and
    high BW communications are all operational to some degree in
    the US, and are being improved on a daily basis. I would
    therefore say that unless we show them how, we will be the
    leaders in this field, not they.

I have looked, and I know people who have looked much more carefully, 
at the usefulness of current fashions in parallel architectures for 
general deductive inference engines. The picture, unfortunately, is 
not brilliant.  Given that ICOT are committed to logic programming and
deductive mechanisms in general, there isn't that much that they could
borrow from that work. That is, they are taking genuine research 
risks. To explain fully why I think most current architectures are not
appropriate for logic programming/deduction would take me too far 
afield. I will just point out that logic programming/deduction involve
dealing with incompletely specified objects (terms with uninstantiated
variables) that can be specified further in many alternative ways (OR 
parallelism).  Implementation of this kind of parallelism in currently
BUILT architectures would involve either wholesale copying or a high 
cost in accessing variable bindings.
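The "wholesale copying" option can be made concrete in a few lines.
This is only an illustrative sketch of OR-parallelism over binding
environments (variable and value names invented), not a claim about
any particular machine:

```python
import copy

def or_parallel_branches(var, alternatives, env):
    """OR-parallelism by wholesale copying: each alternative binding
    for an uninstantiated variable proceeds in its own copy of the
    binding environment, so parallel branches never interfere.
    The deepcopy is exactly the copying cost mentioned above."""
    branches = []
    for alt in alternatives:
        branch_env = copy.deepcopy(env)   # full copy per branch
        branch_env[var] = alt
        branches.append(branch_env)
    return branches

envs = or_parallel_branches("X", ["paris", "london"], {"Y": "city"})
# Two independent environments: X=paris and X=london, each with Y=city.
```

The alternative, sharing one environment with tagged bindings, trades
the copying away for the high per-access cost Pereira describes.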

Fernando Pereira

------------------------------

Date: 01 Aug 83  1422 PDT
From: Jim Davidson <JED@SU-AI>
Subject: The Fifth Generation (book review)

BC-BOOK-REVIEW Undated By CHRISTOPHER LEHMANN-HAUPT c. 1983 N.Y. Times
News Service
    THE FIFTH GENERATION. Artificial Intelligence and Japan's Computer
Challenge to the World. By Edward A. Feigenbaum and Pamela McCorduck.
275 pages. Illustrated with diagrams. Addison-Wesley. $15.75.

    This isn't just another of those books that says Japan is better 
than we are and therefore is going to keep on whipping us in 
productivity. ''The Fifth Generation'' goes considerably further than 
that. It points with a trembling finger at Japan's commitment to 
produce within a decade a new generation of computers so immensely 
powerful that they will in effect constitute a new and revolutionary 
form of wealth.
    KIPS, these computers will be called, an acronym for knowledge
information processing systems. They will exploit the recent
speculation that intelligence, be it real or artificial, doesn't 
depend so much on the power to reason as it does on a ''messy bunch of
details, facts, rules of good guessing, rules of good judgment, and
experiential knowledge,'' as the authors put it. They will be so much
more powerful that where today's machines can handle 10,000 to 100,000
logical inferences per second, or LIPS, the next-generation computer
will be capable of 100 million to 1,000 million LIPS.
    These computers, if the Japanese succeed, will be able to interact
with people using natural language, speech and pictures. They'll 
transform talk into print and translate one language into another.  
Compared to today's machines, they'll be what automobiles are to 
bicycles. And because they'll raise knowledge to the status of what 
land, labor and capital once were, these machines will become ''an 
engine for the new wealth of nations.''
    Will the Japanese really pull this off, despite their supposed 
tendency to be ''copycats'' instead of innovators? The authors insist 
that this and other stereotypes are largely mythical; that every great
industrial nation must go through a phase of imitation. Sure, the
Japanese can do it. And even if they fail to fulfill their grand 
design, they'll likely achieve enough to make it pointless for any 
other nation to compete with them. Meanwhile, the United States will 
assume the role of ''the first great post-industrial agrarian 
society.''
    It's quite an awesome picture that Edward A. Feigenbaum and Pamela
McCorduck have painted. What's more, they have impressive credentials
- Feigenbaum as professor of computer science at Stanford University 
and a founder of Teknowledge Inc., a pioneer knowledge engineering 
company; Mrs. McCorduck as a science writer who teaches at Columbia 
and whose last book was a history of artificial intelligence called 
''Machines Who Think.'' And their jeremiad is extremely well written, 
even quite witty in places. It's certainly more articulate by an order
of magnitude than ''In Search of Excellence,'' the book that defends
America's managerial potential and now sits atop the nonfiction
best-seller list.
    So what are we supposed to do in the face of this awesome
challenge?  The authors list various possibilities, such as joining up
with Japan or preparing for our future as the world's truck garden.
But what they'd really like to see is ''a national center for
knowledge technology'' - that is, ''a gathering up of all knowledge,''
''to be fused, amplified, and distributed, all at orders of magnitude
difference in cost, speed, volume, and >>usefulness<< over what we
have now.''
    Be that as it may. While ''The Fifth Generation'' makes a 
powerful case, there are those who believe that, between the 
Pentagon's Defense Advanced Research Projects Agency (DARPA) and 
several interindustry groups that have been formed, we have already 
been sufficiently aroused to compete in this new race for world 
leadership. (The Soviet Union, by the way, is out in left field, 
according to the authors.)
    Whether the apocalypse it foresees is real or not, ''The Fifth 
Generation'' is worthwhile reading. Pamela McCorduck is very good on 
the debate over the ability of the machines to think, concluding that 
the condemnation they have met has been largely political - amusingly 
similar to ''the reasons given in the nineteenth century to explain 
why women could never be the intellectual equals of men.'' Feigenbaum 
is fascinating on his firsthand impressions of the Japanese computer 
establishment. (Each of the co-authors becomes a character in the 
narrative when his or her specialty happens to come up.)
    Together they are lucid on what the fifth-generation machines will
be like. And there is the standard mind-bending section on future 
computer applications. I particularly like Mrs. McCorduck's vision of 
the geriatric robot. ''It isn't hanging about in the hopes of 
inheriting your money - nor of course will it slip you a little 
something to speed the inevitable. It isn't hanging about because it 
can't find work elsewhere. It's there because it's yours. It doesn't 
just bathe you and feed you and wheel you out into the sun when you 
crave fresh air and a change of scene, though of course it does all 
those things. The very best thing about the geriatric robot is that it
>>listens<<. 'Tell me again,' it says, 'about how wonderful-dreadful
your children are to you. Tell me again that fascinating tale of the
coup of '63. Tell me again ... ' And it means it.''

------------------------------

End of AIList Digest
********************

∂04-Aug-83  1211	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #32
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Aug 83  12:09:23 PDT
Date: Thursday, August 4, 1983 9:26AM
From: AIList Moderator: Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #32
To: AIList@SRI-AI


AIList Digest            Thursday, 4 Aug 1983      Volume 1 : Issue 32

Today's Topics:
  Graph Theory - Finding Clique Covers,
  Knowledge Representation - Textnet,
  Fifth Generation & Misc. - Opinion,
  Lisp - Revised Maclisp Manual & Review of IQLisp
----------------------------------------------------------------------

Date: 2 Aug 83 11:14:51 EDT  (Tue)
From: Dana S. Nau <dsn%umcp-cs@UDel-Relay>
Subject: A graph theory problem

The following graph theory problem has arisen in connection with some
AI research on computer-aided design and manufacturing:

    Let H be a graph containing at least 3 vertices and having no
    cycles of length 4.  Find a smallest clique cover for H.

If there were no restrictions on the nature of H, the problem would be
NP-hard, but given the restrictions, it's unclear what its complexity
is.  A couple of us here at Maryland have been puzzling over the
problem for a week or so, and haven't been able to reduce any known
NP-hard problem to it.  However, the fastest procedure we have found
to solve the problem takes exponential time in the worst case.

Does anyone know anything about the computational complexity of this
problem, or about possible procedures for solving it?
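[A brute-force sketch of the search, in modern Python, may make the
problem concrete for readers.  The code and names below are
illustrative only -- they are not from the original message, and the
procedure is the naive exponential-time one the poster describes, not
an improvement on it.]

```python
from itertools import combinations

def is_clique(adj, verts):
    """True iff every pair in verts is adjacent (adj maps vertex -> set)."""
    return all(v in adj[u] for u, v in combinations(list(verts), 2))

def min_clique_cover(adj):
    """Smallest partition of the vertices into cliques, by exhaustive
    branching on an uncovered vertex.  Exponential in the worst case."""
    best = [None]

    def cliques_containing(v, allowed):
        # every clique that contains v, drawn from the allowed vertices
        cand = [u for u in allowed if u in adj[v]]
        for r in range(len(cand), -1, -1):          # try big cliques first
            for extra in combinations(cand, r):
                c = {v, *extra}
                if is_clique(adj, c):
                    yield c

    def search(uncovered, cover):
        if best[0] is not None and len(cover) >= len(best[0]):
            return                                  # cannot beat best cover
        if not uncovered:
            best[0] = list(cover)
            return
        v = next(iter(uncovered))
        for c in cliques_containing(v, uncovered - {v}):
            search(uncovered - c, cover + [c])

    search(set(adj), [])
    return best[0]
```

On graphs small enough to enumerate, this confirms, for instance, that
a 5-cycle (which contains no 4-cycles) needs three cliques.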

------------------------------

Date: 3 Aug 83 20:50:46 EDT  (Wed)
From: Randy Trigg <randy%umcp-cs@UDel-Relay>
Subject: Textnet

[Adapted from Human-Nets.  The organization and indexing
of knowledge are topics that should be of interest to the AI
community.  -- KIL]

Regarding the recent worldnet discussion, I thought I'd briefly 
describe my research and suggest how it might apply: My thesis work 
has been in the area of advanced text handlers for the online 
scientific community.  My system is called "Textnet" and shares much 
with both NLS/Augment and Hypertext.  It combines a hierarchical 
component (like NLS, though we allow and encourage multiple 
hierarchies for the same text) with the arbitrary linked network 
strategy of Hypertext.  The Textnet data structure resembles a 
semantic network in that links are typed and are valid manipulable 
objects themselves, as are "chunks" (nodes with associated text) and 
"tocs" (nodes capturing hierarchical info).

I believe that a Textnet approach is the most flexible for a national 
network.  In a distributed version of Textnet (distributing 
Hypertext/Xanadu has also been proposed), users create not only new 
papers and critiques of existing ones, but also link together existing
text (i.e., reindexing information), and build alternate
organizations.

There can be no mad dictator in such an information network.  Each 
user organizes the world of scientific knowledge as he/she desires.  
Of course, the system can offer helpful suggestions, notifying a user 
about new information needing to be integrated, etc.  But in this 
approach, the user plays the active role.  Rather than passively 
accepting information in whatever guise worldnet decides to promote, 
each must take an active hand in monitoring that part of the network 
of interest, and designing personalized search strategies for the 
rest.  (For example, I might decree that any information stemming from
a set of journals I deem absurd, shall be ignored.)  After all, any 
truly democratic system should and does require a little work from 
each member.

------------------------------

Date: 3 Aug 1983 0727-PDT
From: FC01@USC-ECL
Subject: Re: Fifth Generation

Several good points were made about the Japanese capabilities and
plans for 5th generation computers. I certainly didn't intend to say
that they weren't capable of building such machines, only that the
U.S. could easily beat them to it if the effort were deemed
worthwhile. I have to agree that the nature of systolic arrays is
quite different from the necessary architecture for inference engines,
but nevertheless for vision and speech applications, these arrays are 
quite clearly superior. I know of no other nation with a data flow
machine in operation (although the Japanese are most certainly working
on it). Virtually every theorem proving system in existence was
written in the U.S. All of this information was freely (and rightly in
my opinion) disseminated to the rest of the world. If we continue to
do the research and seek immediate profits at the expense of long term
development, there is no doubt in my mind that the Japanese will beat
us there. If on the other hand, we use our extreme expertise to make 
our development programs the best they can be, and don't make the same
mistake we made with robotics in the 70s, I feel we can build better
machines sooner.

        Lisp translators from Interlisp to other Lisps seem very 
interesting to me. Perhaps someone could send me a pointer to an 
ARPA-net mailing address of the creator/maintainer of these programs.
To my knowledge, none operates w/out human assistance, but I could be 
wrong.  [Check with Hanson@SRI-IU for Rodney Brooks' Maclisp-to-Franz 
macro package.  It does not cover all features in Maclisp.  -- KIL]

        As to natural language translation using computers, it has
been tried for technical translation and has been quite successful as a
dictionary. As of 5 years ago, there were no real translators beyond
this for natural language.  Perhaps this has changed drastically. It
is my guess that without a system capable of learning, true
translation will never be done. It is simply too much to expect that a
human expert would be able to embody all of the knowledge of a
language into a program. Perhaps 90% translation could be achieved in
a few years, and 99% could probably be here w/in 10 years (between
similar languages).

        Speech recognition can be quite effective for relatively small
vocabularies by a given speaker in a particular language.
Understanding speech is a considerably slower process, but has the
advantage of trying to make sense of the sounds. It is probably not
realistic to say that general-purpose speech understanding systems in
multiple languages with multiple speakers using large vocabularies
will be operational at real time performance in the next 10 years.

        Vision systems have been researched considerably for limited
robotics applications. Context boundedness seems to have a great
effect on the sort of IO that humans do. It is certainly not clear
that real time vision systems capable of understanding large varieties
of environments will be operational w/in the next 10 years.

        These problems are not simply solved by having very large
quantities of processing power! If they were, 5th generation computers
would not be such a risk. Even if the goals are not met, the advances
due to a large R+D program such as ICOT's will certainly have many
technological spinoffs with a widespread effect on the world
marketplace. It has been a longstanding problem with AI research that
people who demonstrate its results and people who report on these 
demonstrations both stress the possibilities for the future rather
than the realities of today. In many cases, the misconceptions spread
through the scientific community as well as the general public. Even
many computer science 'experts' that I've met have vast misconceptions
about what the current systems can in fact do, have in fact done, and
can be easily expanded to do. In many cases, NP complete problems have
been approached through heuristic means. This certainly works in many
cases, but as the sizes of problems increase, it is not clear that
these heuristics will apply as handily. NP completeness cannot be 
gotten around in general by building bigger or faster computers.
Computer learning has only been approached by a few researchers, and
few people would be considered 'intelligent' if they couldn't learn
from their mistakes.

        It doesn't bother me to see Kirk destroy computers with his
illogical ways. I've personally blown away many operating systems
accidentally with my illogical ways, and don't expect that anyone will
ever be able to build a 'perfect' machine. It does bother me when
people look at that as more than fantasy and claim it as scientific
evidence. Just as the 'robots' that are run by remote control (kind of
like a radio controlled airplane) sometimes upset me when they fool
people into thinking they are autonomous and intelligent.

                                Yet another flaming controversy
				starter by
                                        Fred

------------------------------

Date: 3 August 1983 15:04 EDT
From: Kent M. Pitman <KMP @ MIT-MC>
Subject: MIT-LCS TR-295: The Revised Maclisp Manual

They said it would never happen, but look for yourself...

                        The Revised Maclisp Manual
                             by Kent Pitman

                                Abstract

Maclisp is a dialect of Lisp developed at M.I.T.'s Project MAC (now
the MIT Laboratory for Computer Science) and the MIT Artificial
Intelligence Laboratory for use in artificial intelligence research
and related fields.  Maclisp is descended from Lisp 1.5, and many
recent important dialects (for example Lisp Machine Lisp and NIL) have
evolved from Maclisp.

David Moon's original document on Maclisp, The Maclisp Reference
Manual (alias the Moonual) provided in-depth coverage of a number of
areas of the Maclisp world. Some parts of that document, however, were
never completed (most notably a description of Maclisp's I/O system);
other parts are no longer accurate due to changes that have occurred
in the language over time.

This manual includes some introductory information about Lisp, but is
not intended as a tutorial. It is intended primarily as a reference
manual; particularly, it comes in response to users' pleas for more 
up-to-date documentation. Much text has been borrowed directly from
the Moonual, but there has been a shift in emphasis. While the Moonual
went into greater depth on some issues, this manual attempts to offer
more in the way of examples and style notes.  Also, since Moon had
worked on the Multics implementation, the Moonual offered more detail
about compatibility between ITS and Multics Maclisp. While it is hoped
that Multics users will still find the information contained herein to
be useful, this manual focuses more on the ITS and TOPS-20
implementations since those were the implementations most familiar to
the author.

The PitMANUAL, draft #14 May 21, 1983
                                   Saturday Evening Edition

Keywords: Artificial Intelligence, Lisp, List Structure, Maclisp,
          Programming Language, Symbol Manipulation

Ordering Information:

        The Revised Maclisp Manual
        MIT-LCS TR-295, $13.10

        Publications
        MIT Laboratory for Computer Science
        545 Technology Square
        Cambridge, MA 02139

About 300 copies were made. I don't know how long they'll last.
--kmp

------------------------------

Date: 1 August 1983 1747-EDT
From: Jeff Shrager at CMU-CS-A
Subject: IQLisp for the IBM-PC


        A review of IQLisp (by Integral Quality, 1983).

                Compiled by Jeff Shrager
                    CMU Psychology
                      7/27/83

The following comments refer to IQLisp running on an IBM-PC XT/256K
(you tell IQLisp the host machine's memory size at startup).  I spent
two two-hour (approximately) sessions with IQLisp just going through
the manual and hacking various features.  Then I tried to implement a
small production system interpreter (which took another three hours).

I. Things that make IQLisp more attractive than other micro Lisp
   systems that I have seen.

  A. The general workspace size is much larger than most due to the
     IBM-PC XT's expanded capacity.  IQLisp can take advantage of the
     increased space and the manual explains in detail how memory
     can be rearranged to take advantage of different programming
     requirements.  (But, see II.G.) (See also, summary.)
  B. The Manual is complete and locally legible. (But see II.D.)
     The internal specifications manual is surprisingly clear and
     complete.
  C. There is a window package. (But the windows aren't implemented
     to scroll under one another so the feature is more-or-less
     useless.)
  D. There is a macro facility.  This feature is important to both
     speed and eventual implementation of a compiler. (But see II.B.)
     Note that the manual teaches the "correct" way to write
     fexprs -- i.e., with macros.
  E. It uses the 8087 FP coprocessor if one exists. (But see II.A.)
  F. Integer bignums are supported.
  G. Arrays are supported for various data types.
  H. It has good "simple" I/O facilities.
     1. Function key support.
     2. Single keystroke input.
     3. Read macros. (No print macros?)
     4. A (marginal) window facility.
     5. Multiple streams.
  I. The development package is a useful programming tool.
     1. Error recovery tools are well designed.
     2. A complete structure editor is provided. (But, see II.I.)
     3. Many useful macros are included (e.g., backquote).
  J. It seems to be reasonably fast.  (See summary.)
  K. Stack frame hacking functions are provided which permit error
     control and evaluations in different contexts. (But, see II.H.)
  L. There is a clean interface to DOS.  (The "DIR" function is
     especially useful and cleverly implemented.)


II. Negative aspects of IQLisp.  (* Things marked with a "*" indicate
    important deficiencies.)

**A. There is no compiler!
 *B. Floating point is not supported without the 8087.  One would
     think that at least some very slow software FP would be provided.
 *C. Casing is completely backwards.  Uppercase is demanded by IQLisp
     which forces one to put on shift lock (in a bad place on the IBM
     PC).  If any case dependency is implemented it should be the
     opposite (i.e., demand lower case) but case sensitivity should
     be switch controllable -- and default OFF!
 *D. The manual is poorly organized.  It is very difficult to find
     a particular topic since there are no complete indexes and the
     topics are split over several different sections.
  E. Error recovery is sometimes poor.  I have had three or four
     occasions to reboot the PC because IQLisp had gone to lunch.
     Once this was because the 8087 was not present and I had told
     the system that it was.  I don't know what caused the other
     problems.
  F. The file system supports only sequential files.
  G. The stack is fixed at 64K maximum which isn't very much and
     permits only about 700 levels of binding-free recursion.
  H. No new features of larger Lisp systems are provided.  For
     example: closures, flavors, etc.  This is really not a
     reasonable complaint since we're talking 256K here.
  I. There is no screen editor for functions.


III. Summary.

I was disappointed by IQLisp but perhaps this is because I am still
dreaming of having a Lisp machine for under $5,000.  IQ has obviously
put a very large amount of effort into the system and its
documentation (the latter being at least as important as the former).

Although one does not have all the functionality of a Lisp machine in
IQLisp (or even nearly so) I think that they have done an admirable
job within the constraints of the IBM-PC.  Some of the features are
overkill (e.g., the window system, which is pretty worthless in the way
provided and in a non-graphics environment.)

My production system was not the model of efficient PS hacking.  It
was not meant to be.  I wanted to see how IQLisp compared with our
Vax VMS Franz system.  I didn't use a RETE net or efficient memory
organization.  IQ didn't do very well against even a heavily loaded
Vax (also interpreted lisp code). The main problem was space, not
speed.  This is to be expected on a machine without virtual memory.
Since there are no indexed file capabilities in IQLisp, the user is
strictly limited by the available core memory. I think that it's
going to be some time before we can do interesting AI with a micro.
However, (1) I think that I could have rewritten my production system
to be much more efficient in both space and time.  It may have run
acceptably with some careful tuning (what do you want for three
hours!?). And (2) we are going to try to use the system in the near
future for some human-computer interaction experiments -- as a
single-subject workstation for learning Lisp.  I see no reason that
it should not perform acceptably in domains which are less
information intensive than AI.

The starred (*) items in section II above are major stumbling blocks
to using IQLisp in general.  Of these, it is the lack of a Lisp
compiler which stops me from recommending it to everyone.  I expect
that this will be corrected in the near future because they have all
the required underpinnings (macros, assembly interface, etc).  Why
don't people just write a simple little lisp system and a whizzy
compiler?

------------------------------

End of AIList Digest
********************

∂05-Aug-83  2115	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #33
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Aug 83  21:13:29 PDT
Date: Friday, August 5, 1983 5:13PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #33
To: AIList@SRI-AI


AIList Digest            Saturday, 6 Aug 1983      Volume 1 : Issue 33

Today's Topics:
  Automatic Translation - FRANZLATOR & Natural Language,
  Expert Systems - Survey Alert,
  Fifth Generation - Opinions,
  Computational Complexity - Parallelism,
  Distributed AI - Problem Solving Bibliography,
  Literature Sources - Requests,
  Workstations - Request,
  Job - Stanford Heuristic Programming Project
----------------------------------------------------------------------

Date: Thu, 4 Aug 83 12:06 EDT
From: Tim Finin <Tim%UPenn@UDel-Relay>
Subject: FRANZLATOR inter-dialect translation system


We have built a rule-driven lisp-to-lisp translation system
(FRANZLATOR) in Franz lisp and have used it to translate KL-ONE from
Interlisp to Franz. ("We" includes people here at Penn and at BBN and
CCA.)  The system is modular, so that modifying it to work with a
different source and target dialect should involve only changing
several data bases.

The translator is organized as a two-pass system which is applied to a
set of source-dialect files and produces a corresponding set of
target-dialect files and a set of files containing notes about the
translation (e.g.  possible errors).

During the first pass all of the source files are scanned to build up
a database of information about the functions defined in the file
(e.g. type of function, arity, how it evals its args).  In the second
pass the expressions in the source files are translated and the
results written to the target files. The translation of an
s-expression is driven by transformation rules applied according to an
"eval-order" schedule (i.e. the arguments to a function call are
translated before the call to the function itself). An additional
initial pass may be required to perform certain character-level
transformations, although this can often be done through the use of
multiple readtables.

The actual translation is done by a set of rewrite rules, each rule
taking an s-expression into one or more resultant s-expressions.  In
addition to the usual "pattern" and "result" parts, rules can be
easily augmented with arbitrary conditions and actions and can have
several other attributes which control their application (e.g. a
priority). Variables are represented using the "backquote" convention.
Example of rules for Interlisp->Franz are:
   (NIL nil)
   ((NLISTP ,x) (not (dtpr ,x)))
   ((PROG1 ,@args) (prog2 nil ,@args))
   ((DECLARE: ,@args) ,(translateDeclare: ,args))
   ((and ,@x (and ,@y) ,@z) (and ,@x ,@y ,@z) -cyclic)

The translation rules are presented to the system in the form
described above and are immediately "compiled" (by macro-expansion)
into Lisp code which is quite efficient and can be, of course, further
compiled by LISZT.  The pattern matching operation, for example, is
"open coded" into a conjuction of primitive tests and action (e.g. EQ,
EQUAL, LENGTH, SETQ).
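[A toy version of this eval-order rewriting is easy to sketch in
modern Python.  The rule table and function below are hypothetical
illustrations of the scheme Finin describes -- they are not FRANZLATOR
code, and they omit the conditions, actions, and priorities he
mentions.  Nested lists stand in for s-expressions.]

```python
# Hypothetical sketch of rule-driven, eval-order dialect translation.
# Each rule maps the head symbol of a form to a transformer function.

RULES = {
    # (NLISTP x) -> (not (dtpr x))
    "NLISTP": lambda args: ["not", ["dtpr", *args]],
    # (PROG1 ...) -> (prog2 nil ...)
    "PROG1": lambda args: ["prog2", "nil", *args],
}

def translate(expr):
    """Translate the arguments before the enclosing call ('eval
    order'), then apply the rule for the head symbol, if any."""
    if not isinstance(expr, list) or not expr:
        return "nil" if expr == "NIL" else expr   # atoms
    head, *args = expr
    args = [translate(a) for a in args]           # arguments first
    rule = RULES.get(head)
    return rule(args) if rule else [head, *args]
```

Applied to (PROG1 (NLISTP x) y), this yields
(prog2 nil (not (dtpr x)) y): the arguments were translated before
the call that encloses them, just as in the schedule described above.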

If you are interested in more information, contact me.

- Tim at UPENN (csnet)

------------------------------

Date: Friday, 5 August 1983 12:43:04 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Machine translation

        The thing that makes any kind of general purpose machine
translation extremely hard is that there generally aren't one-to-one
correspondences between words, phrases, or sometimes concepts in two
different human languages.  A real translator essentially reads and
understands the text in one language, and then generates the
appropriate text in the other language.  Since understanding general
texts requires huge amounts of real-world knowledge, unrestricted
machine translation will arrive about the time AI programs can pass
the Turing test.  In my opinion, this will be substantially longer
than ten years.

------------------------------

Date: Thu 4 Aug 83 09:25:16-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems Summary

The August issue of IEEE Spectrum contains an article by William B.  
Gevarter (of NASA) titled "Expert Systems: Limited but Powerful".  The
table of existing expert systems shows 79 systems in 16 categories.  
The text includes brief descriptions of Dendral, Mycin, R1, and 
Internist.

                                        -- Ken Laws

------------------------------

Date: 4 Aug 83 8:56:21-PDT (Thu)
From: decvax!linus!philabs!ras @ Ucb-Vax
Subject: Re: Japanese Effort
Article-I.D.: philabs.27320

        Bully for you Fred! I also believe the Japanese do not have
        the know-how nor the manpower to create such a machine.
        They make great memory devices, but that's where it ends.

                                        Rafael Aquino !plabs

------------------------------

Date: Thu 4 Aug 83 13:41:13-PDT
From: Al Davis <ADavis at SRI-KL>
Subject: Re: Fifth Generation Book Review


As a frequent visitor to the Soviet Union, and regular reader of
Kibernetica, I don't get the feeling that the "Russians are out in
left field" - nor do I feel that the book is particularly
illuminating.  It is readable and provides some excellent insight for
the non-professional.  However, the hype and the reality are carefully
interwoven.  After all, how professional is the "pointing of a
trembling finger at the Japanese"?  Take your pick.

                                                Al Davis

                                                AI Architecture
                                                Fairchild AI Labs

------------------------------

Date: 4 Aug 1983 23:05:15-PDT
From: borgward.umn-cs@Rand-Relay
Subject: Re: Fifth Generation Computing

I do know of other nations with a data flow machine in operation.  
Gurd and Watson have one that works at Manchester in England.  I think
that the French LAU system also works.  Such lapses in attention are
what make Americans unpopular in Europe.  We also import a lot of AI
research from Europe, Prolog included.

--Peter Borgwardt, University of Minnesota

------------------------------

Date: Fri 5 Aug 83 14:06:06-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: NP-completeness and parallelism

        In AIList V1 #32 Fred comments that "NP-completeness cannot be 
gotten around in general by building bigger or faster computers".  My
guess would be that parallelism may offer a way to reduce the order of
an algorithm, perhaps even to a polynomial order (using a machine with
"infinite parallel capacity", closely related to Turing's machine with
"infinite memory"). For example, I have heard of work developing 
sorting algorithms for parallel machines which have a lower order than
any known sequential algorithm.

        Perhaps more powerful machines are truly the answer to some of
our problems, especially in vision analysis and data base searching.  
Has anyone heard of a good book discussing parallel algorithms and 
reduction in problem order?

David Rogers

DRogers@SUMEX-AIM.ARPA
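[A note on the arithmetic: Rogers is presumably thinking of results
such as Batcher's sorting networks, whose O(log^2 n) depth is indeed
lower than the O(n log n) running time of any sequential comparison
sort.  For exponential-time search, though, a fixed number of
processors p divides the running time by at most p, so holding the
running time constant requires hardware that grows exponentially with
problem size.  The sketch below, illustrative Python with hypothetical
names, makes that explicit.]

```python
def processors_needed(n, time_budget):
    """Processors required to finish 2**n units of work within
    time_budget time steps, assuming perfect linear speedup."""
    work = 2 ** n
    return -(-work // time_budget)    # ceiling division

# Holding the time budget fixed, each unit increase in problem size n
# roughly doubles the hardware required: the exponential growth is
# moved into the processor count, not eliminated.
```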

------------------------------

Date: Thu 4 Aug 83 17:41:01-PDT
From: Vineet Singh <vsingh@SUMEX-AIM.ARPA>
Subject: Distributed Problem Solving: An Annotated Bibliography

 For all of you who expressed interest in the annotated bibliography
on Distributed Problem Solving, here is some important information on
how to ftp a copy if you don't know this already.

The bibliography manuscript file "<vsingh.dps>dpsdis.bib" will be kept
on sumex-aim.arpa.  Please login as "anonymous" with password 
"sumexguest" (one word).

The file is by no means complete as you can see.  It will be 
continually updated.  You may notice that the file is prepared for 
Scribe formatting.

Please mail additional entries/annotations/corrections/suggestions to 
me and I will incorporate them in the file as soon as possible.  The 
turnaround time will be a lot shorter if the new entries are also in 
Scribe format.  If you know anything about Scribe, please save me a 
lot of effort and put your entries in Scribe format.

For those of you that did not see the original message, I have 
reproduced it below.

-------------------------------------------------------------------------------


This is to request contributions to an annotated bibliography of 
papers in *Distributed Problem-Solving* that I am currently compiling.
My plan is to make the bibliography available to anybody that is 
interested in it at any stage in its compilation.  Papers will be from
many diverse areas: Artificial Intelligence, Computer Systems 
(especially Distributed Systems and Multiprocessors), Analysis of 
Algorithms, Economics, Organizational Theory, etc.

Some miscellaneous comments.  My definition of distributed 
problem-solving is a very general one, namely "the process of many 
entities engaged in solving a problem", so feel free to send a 
contribution if you are not sure that a paper is suitable for this 
bibliography.  I also encourage you to make short annotations; more 
than 5 sentences is long.  All annotations in the bibliography will 
carry a reference to the author.  If your bibliography entries are in 
Scribe format that's great because the entire bibliography will be in 
Scribe.

Vineet Singh (VSINGH@SUMEX-AIM.ARPA)

------------------------------

Date: 1 Aug 83 4:22:03-PDT (Mon)
From: ihnp4!cbosgd!cbscd5!lvc @ Ucb-Vax
Subject: AI Journals
Article-I.D.: cbscd5.365

I am interested in subscribing to a computer science journal(s) that
deals primarily with artificial intelligence.  Could anyone who knows
of such journals send me their names via mail?  I will
post a list of all those sent my way.  Thanks in advance,

Larry Cipriani cbosgd!cbscd5!lvc

------------------------------

Date: 4 Aug 83 0:26:53-PDT (Thu)
From: hplabs!hp-pcd!jrf @ Ucb-Vax
Subject: AI~Geography
Article-I.D.: hp-pcd.1455



Please send info on what's available in Geography (PROSPECTOR,
cartography, etc.).  Thanks.

jrf

------------------------------

Date: 05 Aug 83  1417 PDT
From: Fred Lakin <FRD@SU-AI>
Subject: LISP & SUNs ...

I am interested in connections between Franz LISP and SUN workstations.
Like how far along is Franz on the SUN?  Is there some package which
allows Franz on a VAX to use a SUN as a display device?  Also, now
that i think of it, any other LISP's which might run on both SUNs and
VAXes ...

Any info on this matter would be appreciated.  Thanks, Fred Lakin

------------------------------

Date: Thu 4 Aug 83 09:57:01-PDT
From: Larry Fagan  <FAGAN@SUMEX-AIM.ARPA>
Subject: Programmer - ONCOCIN Project: Stanford Heuristic Programming
         Project

Programmer - ONCOCIN Project:  Stanford Heuristic Programming Project

        This position will involve applications programming for an 
oncology protocol management system known as ONCOCIN.  This project, 
with Ted Shortliffe as principal investigator, represents an 
application of expert systems to the treatment of cancer patients, and
is currently in daily use by physicians.  The job requires significant
experience with artificial intelligence techniques and the LISP or
Interlisp languages.  The applicant must be willing to learn an
already existing, large expert system.  Masters level training in
computer science and previous experience with personal workstations
are highly desirable.  Although the tasks required will be varied, the
emphasis will be on artificial intelligence aspects of the oncology
research work:

*day-to-day management of the Interlisp programming efforts;
*participation in the design as well as the implementation of system
 capabilities;
*documentation of the system on an ongoing basis (system
 overview/description as well as software documentation);
*supervisory coordination of students and part-time programmers who
 may also be working on related projects;
*assistance with occasional non-programming matters important to the
 smooth running of the project and to the efficient and effective
 performance of the system in the clinical environment;
*assistance with system demonstrations for visitors and at meetings;
*assistance with preparation of portions of annual reports and
 funding proposals;
*an ability to work closely with the Chief Programmer, who will
 coordinate the Interlisp efforts with other developing aspects of
 the total project.

Salary:  will follow Stanford University guidelines for Scientific 
Programmer III in accordance with the level of training and prior 
experience.

Contact: Larry Fagan, M.D., Ph.D.  (FAGAN@SUMEX)
         Project Director, ONCOCIN
         Stanford University Medical Center
         TC-117, Dept. of General Internal Medicine
         Stanford, Calif. 94305 (415)497-6979

------------------------------

End of AIList Digest
********************

∂08-Aug-83  1500	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #34
Received: from SRI-AI by SU-AI with TCP/SMTP; 8 Aug 83  14:59:40 PDT
Date: Monday, August 8, 1983 1:15PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #34
To: AIList@SRI-AI


AIList Digest             Monday, 8 Aug 1983       Volume 1 : Issue 34

Today's Topics:
  Fifth Generation - Opinion,
  Translation - Natural Language,
  Computational Complexity - Parallelism,
  LOGO - Request,
  Lab Descriptions - USENET Sites,
  Conferences - AAAI Panel to Honor Alexander Lerner
----------------------------------------------------------------------

Date: 5 Aug 83 20:14:19-PDT (Fri)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!ditzel @ Ucb-Vax
Subject: Re: Japanese Effort
Article-I.D.: ssc-vax.377

While it is true that the United States holds a substantial lead in AI
over the Japanese, it is beyond me how anyone can believe that they do
not have the resources to overcome such a lead.  In my *opinion*,
several things make a Japanese lead in AI machines possible:

*It is a national effort with an attempt to coordinate goals. The fact
that the project will be a coordinated effort, rather than a set of
loosely related developments, should facilitate compatibility among
its different parts.

*It may well be that Japan will have to go to the outside world to
make the project a success. What of it?  A success is still a
success.

*In addition, it is not realistic to believe that a priority project
supported by both government and industry will not try to encourage,
educate, and nurture talented individuals toward the topics covered
by the 5th generation.

*Worse yet, to believe such a project will not have an intense
political and social effect on Japan is also ignoring reality. If and
when successes in project goals do come, various segments of the
society and industrial sectors may begin to participate.

*The 5th generation project at least is visionary, a bit idealistic,
and very ambitious. The outside 'egos' don't have an equivalent
project in the United States (i.e., one that has substantial backing
from industry and government *and* fairly substantial financing for
the next five to ten years).

The point is we are very early into the project.... wait a bit.... we 
may learn a thing or two if we are not energetic enough.



                                            cld

------------------------------

Date: 5 Aug 83 14:50:43-PDT (Fri)
From: decvax!microsoft!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Japanese Effort
Article-I.D.: ssc-vax.376

Concerning your lack of concern about the Japanese:

They may not have the manpower now, but they have been hiring outside 
Japan and giving some pretty strong support to their researchers.  I'd
go in a minute if they made me an offer...

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: 5 Aug 83 12:51:22-PDT (Fri)
From: decvax!linus!utzoo!watmath!echrzanowski @ Ucb-Vax
Subject: 5th generation computers
Article-I.D.: watmath.5613

I recently had an opportunity to show a visiting prof from the
University of Kyoto around our facilities. During one of our
conversations I asked him about the 5th generation computers in Japan.
His response was that it is only a large government promotional
campaign and nothing more.  Sure, they are building some new
computers, but not to the degree that we are expected to believe.


If anyone else has any ideas or comments on 5th generation computers I
would like to see them.


                                   (watmath)!echrzanowski

------------------------------

Date: 6 Aug 83 13:01:14-PDT (Sat)
From: decvax!genrad!mit-eddie!smh @ Ucb-Vax
Subject: Re: 5th generation computers
Article-I.D.: mit-eddie.551

About the professor from Kyoto who claimed that the 5th generation 
project was only a big government promotional effort:

Maybe so, maybe not.  Weren't there some similar gentlemen in
Washington making similar assurances about a different matter around
7 Dec 1941?

------------------------------

Date: Sat, 6 Aug 83 19:42 EDT
From: Tim Finin <Tim%UPenn@UDel-Relay>
Subject: natural language translation


    ... A real translator essentially reads and understands the
    text in one language, and then generates the appropriate
    text in the other language.  Since understanding general
    texts requires huge amounts of real-world knowledge,
    unrestricted machine translation will arrive about the time
    AI programs can pass the Turing test.  In my opinion, this
    will be substantially longer than ten years....

The long-standing machine translation project at the University of
Texas at Austin is not a system based on a deep understanding of the
text being translated, yet it has been giving good results in
translating technical manuals from German to English.  Slocum
reported on its status at the ACL Conference on Applied Natural
Language Processing held in Santa Monica in February 1983.  In this
case, "good" meant requiring less post-translation editing than the
output of human translators.

------------------------------

Date: 6 Aug 83 11:09:57 EDT  (Sat)
From: Craig Stanfill <craig%umcp-cs@UDel-Relay>
Subject: NP-completeness and parallelism

David Rogers commented that in parallel computing it makes sense to
assume a processor with an infinite number of processing elements,
much as a Turing machine has an infinite amount of memory.  He then
goes on to suggest that this might allow the effective solution of
NP-hard problems.

If we do this, we need to consider the processor-complexity of our
algorithms, not just the time-complexity.  For example, are there
algorithms for NP-hard problems which are linear in time but NP-hard
in the number of processors?  I suspect this is the case.

Parallelism is not the solution to combinatorial explosions; it is
just as limiting to use 2**n processors as it is to use 2**n time.
However, the speedup is probably worth the effort; I would rather work
with a computer that uses 64,000 processors for one second than one
which uses 1 processor for 64,000 seconds.  Now, if we can just figure
out how to do this ...
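
[One concrete instance of the suspicion above, offered as an
illustrative sketch rather than part of the original message:
brute-force satisfiability testing takes only linear time per
processor if one hypothetical processor is assigned to each of the
2**n truth assignments.  The Python loop below simulates that
exponential bank of processors serially.]

```python
import itertools

def check(assignment, clauses):
    """The linear-time work done by one (hypothetical) processor:
    test a single truth assignment against every clause.  A literal
    is a nonzero integer; negative means negated."""
    return all(any(assignment[abs(l) - 1] == (l > 0) for l in c)
               for c in clauses)

def parallel_sat(clauses, n):
    """Simulate 2**n processors, one per truth assignment.  Each
    'processor' does work linear in the formula size; the exponential
    cost shows up in the processor count, not the per-processor time."""
    for bits in itertools.product([False, True], repeat=n):
        if check(bits, clauses):
            return bits          # some processor reports success
    return None

# (x1 OR x2) AND (NOT x1 OR x2): x2 = True satisfies both clauses.
result = parallel_sat([[1, 2], [-1, 2]], 2)   # → (False, True)
```

[The exponential cost has simply been moved from time into hardware,
which is exactly the processor-complexity question raised above.]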

------------------------------

Date: 7 Aug 83 16:57:17-PDT (Sun)
From: harpo!gummo!whuxlb!pyuxll!ech @ Ucb-Vax
Subject: Re: NP-completeness and parallelism
Article-I.D.: pyuxll.388

A couple of clarifications are in order here:

1. NP-completeness of a problem means, among other things, that the
   best known algorithm for that problem has exponential
   worst-case running time on a serial processor.  That is not
   intended as a technical definition, just an operational one.
   Moreover, all NP-complete problems are related by the fact
   that if a polynomial-time algorithm is ever discovered for
   any of them, then there is a polynomial-time algorithm for
   all, so the (highly oversimplified!) definition of
   NP-complete, as of this date, is "intrinsically exponential."

2. Perhaps obvious, but I will say so anyway: n processors yoked in
   parallel can't do better than to be n times faster than a
   single serial processor. For some problems (e.g. sorting),
   the speedup is less.

The bottom line is that the "biggest tractable problem" is
proportional to the log of the computing power at your disposal;
whether you increase the power by speeding up a serial processor or by
multiplying the number of processors is of small consequence.

Now for the silver lining.  NP-complete problems often can be tweaked 
slightly to yield "easy" problems; if you have an NP-complete problem 
on your hands, go back and see if you can restrict it to a more
readily soluble problem.

Also, one can often restrict to a subproblem which, while it is still 
NP-complete, has a heuristic which generates solutions which aren't 
too far from optimal.  An example of this is the Travelling Salesman 
Problem.  Several years ago Lewis, Rosenkrantz, and Stearns at GE
Research described a heuristic that yielded solutions that were no
worse than twice the optimum if the graph obeyed the triangle 
inequality (i.e. getting from A to C costs no more than going from A
to B, then B to C), a perfectly reasonable constraint.  It seems to me
that the heuristic ran in O(n-squared) or O(n log n), but my memory
may be faulty; low-order polynomial in any case.
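
[The digest does not reproduce the heuristic itself; as an
illustration, here is a sketch of the standard MST-doubling
construction, not necessarily the one the GE group analyzed, which
achieves the same twice-the-optimum bound whenever distances obey the
triangle inequality.  The point set is made up.]

```python
import math
from itertools import permutations

def mst_parent(points):
    """Prim's algorithm on the complete Euclidean graph; O(n**2)."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = [False] * n
    best = [math.inf] * n
    parent = [0] * n
    best[0] = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: best[i])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u
    return parent

def approx_tour(points):
    """Preorder walk of the MST with shortcuts.  Tour cost is at most
    2 * MST weight, hence at most twice the optimal tour, whenever
    the distances satisfy the triangle inequality."""
    parent = mst_parent(points)
    children = {i: [] for i in range(len(points))}
    for v, p in enumerate(parent):
        if v != 0:
            children[p].append(v)
    order, stack = [], [0]
    while stack:
        u = stack.pop()
        order.append(u)
        stack.extend(reversed(children[u]))
    return order

pts = [(0, 0), (0, 1), (2, 0), (2, 1), (1, 3)]
tour = approx_tour(pts)
cost = sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(pts)]])
           for i in range(len(pts)))
```

[The dominant cost is building the minimum spanning tree, which is
O(n-squared) here, matching the low-order polynomial bound recalled
above.]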

So: "think parallel" may or may not help.  "Think heuristic" may help
a lot!

=Ned=

------------------------------

Date: 5 Aug 83 17:56:34-PDT (Fri)
From: harpo!eagle!allegra!jdd @ Ucb-Vax
Subject: LOGO wanted
Article-I.D.: allegra.1721

A colleague of mine is looking for an implementation of LOGO, or any
similar language, under UNIX (one that already ran on both Suns and
PDP-11/23's would be ideal, but fat chance of that, eh?).  Failing
that, she would like to find a reasonably portable version (e.g., in
MacLisp).  In any case, if you have suggestions, please send them to
me and I shall forward.

Cheers, John ("This Has Been A Public Service Announcement")
DeTreville Bell Labs

------------------------------

Date: 5 Aug 83 13:15:42-PDT (Fri)
From: decvax!linus!utzoo!utcsrgv!kramer @ Ucb-Vax
Subject: Re: USENET and AI
Article-I.D.: utcsrgv.1898

We at the University of Toronto have a strong AI group that has been
in action for years:

        Area                       Major Project

  Knowledge Representation     PSN (procedural semantic network)

  Databases and Knowledge
  Representation               TAXIS

  Vision                       ALVEN (left ventricular motion understanding)

  Linguistics                  Speech acts


A major summary of our activities is being prepared to appear in the 
magazine for AAAI at some point.

Our research is being done on VAXen under UNIX.  Presently at
utcsrgv, we will soon (September) be moving to a VAX dedicated to AI
work.

------------------------------

Date: 6 Aug 83 13:40:14-PDT (Sat)
From: ihnp4!houxm!hocda!spanky!burl!duke!unc!bts @ Ucb-Vax
Subject: More AI on USENET only
Article-I.D.: unc.5673

     The Computer Science Department at UNC-Chapel Hill is another
site with (some) AI interests that is on USENET but not ARPANET.  We
are one of CSNET's phone sites, but this still doesn't allow us to FTP
files. (Yes, in part, this is a plea for those folks who can FTP to
share with the rest of us on USENET!)

     Our functional programming group has a couple of projects with
some AI overtones.  We have begun to look at AI style programming
languages for Gyula Mago's string reduction tree-machine.  This is a
small-grain parallel computer which executes Backus' FFP language.
We're also looking at automatic FP program transformations.

     Along with our neighbors at Duke University, we have some Prolog
programmers.  Right now, that's C-Prolog at UNC and NU7 UNIX Prolog at
Duke.

        Bruce Smith, UNC-Chapel Hill
        duke!unc!bts (USENET)
        bts.unc@udel-relay (other NETworks)

------------------------------

Date: 5 Aug 83 15:11:37 EDT  (Fri)
From: JACK MINKER <minker%umcp-cs@UDel-Relay>
Subject: AAAI Panel to Honor Alexander Lerner

        In conjunction with the AAAI meeting in Washington, D.C. a
session is being held to honor the 70th birthday of the Soviet
cyberneticist, Professor Alexander Lerner. The session will be held
on:

                Date: Tuesday, August 23, 1983
                Time: 7:00 PM
                Location: Georgetown Room, Concourse Level

        The session will consist of a brief description of Dr.
Lerner's career, followed by a panel discussion on:

                Future Directions in Artificial Intelligence

The following have agreed to be on the panel with me:

                Nils Nilsson
                John McCarthy
                Patrick Winston

Others will be invited to participate in the panel session.

        We hope that you will be able to join us to honor this
distinguished scientist.


                Jack Minker
                University of Maryland

------------------------------

End of AIList Digest
********************

∂09-Aug-83  1920	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #35
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Aug 83  19:20:06 PDT
Date: Tuesday, August 9, 1983 10:00AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #35
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Aug 1983       Volume 1 : Issue 35

Today's Topics:
  Expert Systems - Bibliography,
  Learning - Bibliography,
  Logic - Bibliography
----------------------------------------------------------------------

Date: Tue 9 Aug 83 09:04:09-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Bibliographies

The bibliographies in this and the following three issues were
extracted from the new-reports list put out by the Stanford Math/CS
Library.  I have sorted the citations as best I could from just the
titles.  Reports on planning and problem solving have not been pulled
out separately--they are listed here either by application domain
or by technique.

                                        -- Ken Laws

------------------------------

Date: Tue 9 Aug 83 08:44:04-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Expert Systems Bibliography

This is an update to the titles previously reported in AIList.

J.S. Aikins, J.C. Kunz, E.H. Shortliffe, and R.J. Fallat, PUFF: An
Expert System for Interpretation of Pulmonary Function Data.  Stanford
U. Comp. Sci. Dept., STAN-CS-82-931; Stanford U. Comp. Sci. Dept.
Heuristic Programming Project, HPP-82-013, 1982.  21p.

C. Apte, Expert Knowledge Management for Multi-Level Modelling.  
Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-41, 1982.

B.G. Buchanan and R.O. Duda, Principles of Rule Based Expert Systems.
Stanford U. Comp. Sci. Dept., STAN-CS-82-926; Stanford U. Comp. Sci.
Dept. Heuristic Programming Project, HPP-82-014, 1982.  55p.

B.G. Buchanan, Partial Bibliography of Work on Expert Systems.  
Stanford U. Comp. Sci. Dept., STAN-CS-82-953; Stanford U. Comp. Sci.
Dept. Heuristic Programming Project, HPP-82-30, 1982.  13p.

A. Bundy and B. Silver, A Critical Survey of Rule Learning Programs.  
Edinburgh U. A.I. Dept., Res. Paper 169, 1982.

R. Davis, Expert Systems: Where are We? And Where Do We Go from Here?
M.I.T. A.I. Lab., Memo 665, 1982.

D. Dellarosa and L.E. Bourne, Jr., Text-Based Decisions: Changes in
the Availability of Facts due to Instructions and the Passage of
Time.  Colorado U. Cognitive Sci. Inst., Tech.  rpt. 115-ONR, 1982.

T.G. Dietterich, B. London, K. Clarkson, and G. Dromey, Learning and
Inductive Inference (a section of the Handbook of Artificial
Intelligence, edited by Paul R.  Cohen and Edward A. Feigenbaum).  
Stanford U. Comp. Sci. Dept., STAN-CS-82-913; Stanford U. Comp. Sci.
Dept. Heuristic Programming Project, HPP-82-010, 1982.  215p.

G.A. Drastal and C.A. Kulikowski, Knowledge Based Acquisition of Rules
for Medical Diagnosis.  Rutgers U. Comp. Sci. Res. Lab., CBM-TM-97, 
1982.

N.V. Findler, An Expert Subsystem Based on Generalized Production
Rules.  Arizona State U. Comp. Sci. Dept., TR-82-003, 1982.

N.V. Findler and R. Lo, A Note on the Functional Estimation of Values
of Hidden Variables--An Extended Module for Expert Systems.  Arizona
State U. Comp. Sci.  Dept., TR-82-004, 1982.

K.E. Huff and V.R. Lesser, Knowledge Based Command Understanding: An
Example for the Software Development Environment. Massachusetts U.
Comp. & Info. Sci. Dept., COINS Tech.Rpt. 82-06, 1982.

J.K. Kastner, S.M. Weiss, and C.A. Kulikowski, Treatment Selection and
Explanation in Expert Medical Consultation: Application to a Model of
Ocular Herpes Simplex.  Rutgers U. Comp.  Sci. Res. Lab., CBM-TR-132,
1982.

R.M. Keller, A Survey of Research in Strategy Acquisition.  Rutgers U.
Comp. Sci. Dept., DCS-TR-115, 1982.

V.E. Kelly and L.I. Steinberg, The Critter System: Analyzing Digital
Circuits by Propagating Behaviors and Specifications. Rutgers U.
Comp. Sci. Res. Lab., LCSR-TR-030, 1982.

J.J. King, An Investigation of Expert Systems Technology for
Automated Troubleshooting of Scientific Instrumentation.  Hewlett
Packard Co. Comp. Sci. Lab., CSL-82-012; Hewlett Packard Co. Comp.
Res.  Center, CRC-TR-82-002, 1982.

J.J. King, Artificial Intelligence Techniques for Device
Troubleshooting.  Hewlett Packard Co. Comp. Sci. Lab., CSL-82-009; 
Hewlett Packard Co. Comp. Res. Center, CRC-TR-82-004, 1982.

G.M.E. Lafue and T.M. Mitchell, Data Base Management Systems and
Expert Systems for CAD.  Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-028,
1982.

R.J. Lytle, Site Characterization using Knowledge Engineering -- An
Approach for Improving Future Performance.  Cal U. Lawrence Livermore
Lab., UCID-19560, 1982.

T.M. Mitchell, P.E. Utgoff, and R. Banerji, Learning by
Experimentation: Acquiring and Modifying Problem Solving Heuristics.
Rutgers U. Comp. Sci. Res. Lab., LCSR-TR-31, 1982.

D.S. Nau, Expert Computer Systems, Computer, Vol. 16, No. 2, pp.
63-85, Feb. 1983.

D.S. Nau, J.A. Reggia, and P. Wang, Knowledge-Based Problem Solving
Without Production Rules, IEEE 1983 Trends and Applications Conf., pp.
105-108, May 1983.

P.G. Politakis, Using Empirical Analysis to Refine Expert System
Knowledge Bases.  Rutgers U. Comp. Sci. Res. Lab., CBM-TR-130, Ph.D.
Thesis, 1982.

J.A. Reggia, P. Wang, and D.S. Nau, Minimal Set Covers as a Model for
Diagnostic Problem Solving, Proc. First IEEE Comp. Soc. Int. Conf. on
Medical Computer Sci./Computational Medicine, Sept. 1982.

J.A. Reggia, D.S. Nau, and P. Wang, Diagnostic Expert Systems Based on
a Set Covering Model, Int. J. Man-Machine Studies, 1983.  To appear.

M.D. Rychener, Approaches to Knowledge Acquisition: The Instructable
Production System Project.  Carnegie Mellon U. Comp. Sci. Dept.,
1981.

R.D. Schachter, An Incentive Approach to Eliciting Probabilities.  
Cal. U., Berkeley. O.R. Center, ORC 82-09, 1982.

E.H. Shortliffe and L.M. Fagan, Expert Systems Research: Modeling the
Medical Decision Making Process.  Stanford U. Comp. Sci. Dept., 
STAN-CS-82-932; Stanford U. Comp. Sci. Dept. Heuristic Programming
Project, HPP-82-003, 1982.  23p.

M. Suwa, A.C. Scott, and E.H. Shortliffe, An Approach to Verifying
Completeness and Consistency in a Rule Based Expert System.  Stanford
U. Comp. Sci. Dept., STAN-CS-82-922, 1982.  19p.

J.A. Wald and C.J. Colbourn, Steiner Trees, Partial 2-Trees, and
Minimum IFI Networks.  Saskatchewan U. Computational Sci. Dept., Rpt.
82-06, 1982.

J.A. Wald and C.J. Colbourn, Steiner Trees in Probabilistic Networks.
Saskatchewan U. Computational Sci. Dept., Rpt. 82-07, 1982.

A. Walker, Automatic Generation of Explanations of Results from
Knowledge Bases.  IBM Watson Res. Center, RJ 3481, 1982.

J.W. Wallis and E.H. Shortliffe, Explanatory Power for Medical Expert
Systems: Studies in the Representation of Causal Relationships for
Clinical Consultation.  Stanford U. Comp.  Sci. Dept.,
STAN-CS-82-923, 1982.  37p.

S. Weiss, C. Kulikowski, C. Apte, and M. Uschold, Building Expert
Systems for Controlling Complex Programs.  Rutgers U. Comp. Sci. Res.
Lab., LCSR-TR-40, 1982.

Y. Yuchuan and C.A. Kulikowski, Multiple Strategies of Reasoning for
Expert Systems.  Rutgers U. Comp. Sci. Res. Lab., CBM-TR-131, 1982.

------------------------------

Date: Tue 9 Aug 83 08:47:25-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Learning Bibliography

Anderson, J.R. Farrell, R. Sauers, R.* Learning to plan in LISP.* 
Carnegie Mellon U. Psych.Dept.*1982.

Barber, G.*Supporting organizational problem solving with a 
workstation.* M.I.T. A.I. Lab.*Memo 681.*1982.

Bundy, A. Silver, B.*A critical survey of rule learning programs.* 
Edinburgh U. A.I. Dept.*Res. Paper 169.*1982.

Carbonell, J.G.* Learning by analogy: formulating and generalizing 
plans from past experience.* Carnegie Mellon U.  
Comp.Sci.Dept.*CMU-CS-82-126.*1982.

Carroll, J.M. Mack, R.L.* Metaphor, computing systems, and active 
learning.* IBM Watson Res. Center.*RC 9636.*1982.

Cohen, P.R.* Planning and problem solving.* Stanford U.  
Comp.Sci.Dept.*STAN-CS-82-939; Stanford U. Comp.Sci.Dept.  Heuristic 
Programming Project.*HPP-82-021.*1982.  61p.

Dellarosa, D. Bourne, L.E. Jr.*Text-based decisions: changes in the 
availability of facts due to instructions and the passage of time.* 
Colorado U. Cognitive Sci.Inst.* Tech.rpt. 115-ONR.*1982.

Ehrlich, K. Soloway, E.*An empirical investigation of the tacit plan 
knowledge in programming.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
236.*1982.

Findler, N.V. Cromp, R.F.*An artificial intelligence technique to 
generate self-optimizing experimental designs.* Arizona State U.  
Comp.Sci.Dept.*TR-83-001.* 1983.

Good, D.I.* Reusable problem domain theories.* Texas U.  Computing 
Sci.Inst.*TR-031.*1982.

Kautz, H.A.*A first-order dynamic logic for planning.* Toronto U.  
Comp. Systems Res. Group.*CSRG-144.*1982.

Luger, G.F.*Some artificial intelligence techniques for describing 
problem solving behaviour.* Edinburgh U. A.I.  Dept.*Occasional Paper 
007.*1977.

Mitchell, T.M. Utgoff, P.E. Banerji, R.* Learning by experimentation:
acquiring and modifying problem solving heuristics.* Rutgers U.  
Comp.Sci.Res.Lab.*LCSR-TR-31.* 1982.

Moura, C.M.O. Casanova, M.A.* Design by example (preliminary report).*
Pontificia U., Rio de Janeiro.  Info.Dept.*No. 05/82.*1982.

Nadas, A.*A decision theoretic formulation of a training problem in 
speech recognition and a comparison of training by unconditional versus 
conditional maximum likelihood.* IBM Watson Res. Center.*RC 
9617.*1982.

Slotnick, D.L.* Time constrained computation.* Illinois U.  
Comp.Sci.Dept.*UIUCDCS-R-82-1090.*1982.

Tomita, M.* Learning of construction of finite automata from examples 
using hill climbing.  RR: regular set recognizer.* Carnegie Mellon U.
Comp.Sci.Dept.* CMU-CS-82-127.*1982.

Utgoff, P.E.*Acquisition of appropriate bias for inductive concept 
learning.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TM-02.*1982.

Winston, P.H. Binford, T.O. Katz, B. Lowry, M.* Learning physical 
descriptions from functional definitions, examples, and precedents.* 
M.I.T. A.I. Lab.*Memo 679.* 1982.

Winston, P.H.* Learning by augmenting rules and accumulating censors.*
M.I.T. A.I. Lab.*Memo 678.*1982.

------------------------------

Date: Tue 9 Aug 83 08:48:00-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Logic Bibliography

Ballantyne, M. Bledsoe, W.W. Doyle, J. Moore, R.C. Pattis, R.  
Rosenschein, S.J.* Automatic deduction (Chapter XII of Volume III of 
the Handbook of Artificial Intelligence, edited by Paul R. Cohen and 
Edward A. Feigenbaum).* Stanford U. Comp.Sci.Dept.*STAN-CS-82-937; 
Stanford U.  Comp.Sci.Dept. Heuristic Programming 
Project.*HPP-82-019.* 1982.  64p.

Bergstra, J. Chmielinska, A. Tiuryn, J.*" Hoare's logic is not 
complete when it could be".* M.I.T. Lab. for Comp.Sci.*TM-226.*1982.

Bergstra, J.A. Tucker, J.V.* Hoare's logic for programming languages 
with two data types.* Mathematisch Centrum.*IW 207/82.*1982.

Boehm, H.-J.*A logic for expressions with side-effects.* Cornell U.  
Comp.Sci.Dept.*Tech.Rpt. 81-478.*1981.

Bowen, D.L. (ed.)* DECsystem-10 Prolog user's manual.* Edinburgh U.  
A.I. Dept.*Occasional Paper 027.*1982.

Boyer, R.S. Moore, J.S.*A mechanical proof of the unsolvability of the
halting problem.* Texas U. Computing Sci. and Comp.Appl.Inst.  
Certifiable Minicomputer Project.*ICSCA-CMP-28.*1982.

Bundy, A. Welham, B.*Utility procedures in Prolog.* Edinburgh U. A.I.
Dept.*Occasional Paper 009.*1977.

Byrd, L. (ed.)*User's guide to EMAS Prolog.* Edinburgh U.  A.I.  
Dept.*Occasional Paper 026.*1981.

Demopoulos, W.*The rejection of truth conditional semantics by Putnam 
and Dummett.* Western Ontario U. Cognitive Science Centre.*COGMEM 
06.*1982.

Goto, E. Soma, T. Inada, N. Ida, T. Idesawa, M. Hiraki, K.  Suzuki, M.
Shimizu, K. Philipov, B.*Design of a Lisp machine - FLATS.* Tokyo U.
Info.Sci.Dept.*Tech.Rpt.  82-09.*1982.

Griswold, R.E.*The control of searching and backtracking in string 
pattern matching.* Arizona U. Comp.Sci.Dept.*TR 82-20.*1982.

Hagiya, M.*A proof description language and its reduction system.* 
Tokyo U. Info.Sci.Dept.*Tech.Rpt. 82-03.*1982.

Itai, A. Makowsky, J.*On the complexity of Herbrand's theorem.* 
Technion - Israel Inst. of Tech.  Comp.Sci.Dept.*Tech.Rpt. 243.*1982.

Kautz, H.A.*A first-order dynamic logic for planning.* Toronto U.  
Comp. Systems Res. Group.*CSRG-144.*1982.

Kozen, D.C.*Results on the propositional mu-calculus.* Aarhus U.  
Comp.Sci.Dept.*DAIMI PB-146.*1982.

Makowsky, J.A. Tiomkin, M.L.*An array assignment for propositional 
dynamic logic.* Technion - Israel Inst. of Tech.  
Comp.Sci.Dept.*Tech.Rpt. 234.*1982.

Manna, Z. Pnueli, A.*How to cook a temporal proof system for your pet 
language.* Stanford U. Comp.Sci.Dept.* STAN-CS-82-954.*1982.  14p.

Mosses, P.* Abstract semantic algebras!* Aarhus U.  
Comp.Sci.Dept.*DAIMI PB-145.*1982.

Orlowska, E.*Logic of vague concepts: applications of rough sets.* 
Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS rpt. no.  
474.*1982.

Sakamura, K. Ishikawa, C.* High level machine design by dynamic 
tuning.* Tokyo U. Info.Sci.Dept.*Tech.Rpt.  82-07.*1982.

Sato, M.*Algebraic structure of symbolic expressions.* Tokyo U.  
Info.Sci.Dept.*Tech.Rpt. 82-05.*1982.

Shapiro, E.Y.* Alternation and the computational complexity of logic 
programs.* Yale U. Comp.Sci.Dept.*Res.Rpt. 239.* 1982.

Stabler, E.P. Jr.* Database and theorem prover designs for question 
answering systems.* Western Ontario U. Cognitive Science 
Centre.*COGMEM 12.*1982.

Sterling, L. Bundy, A.* Meta level inference and program 
verification.* Edinburgh U. A.I. Dept.*Res. Paper 168.* 1982.

Treleaven, P.C. Gouveia Lima, I.*Japan's fifth generation computer 
systems.* Newcastle upon Tyne U. Computing Lab.* No. 176.*1982.

------------------------------

End of AIList Digest
********************

∂09-Aug-83  2027	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #36
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Aug 83  20:26:59 PDT
Date: Tuesday, August 9, 1983 10:26AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #36
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Aug 1983       Volume 1 : Issue 36

Today's Topics:
  Robotics - Bibliography,
  Vision - Bibliography,
  Speech Understanding - Bibliography,
  Pattern Recognition - Bibliography
----------------------------------------------------------------------

Date: Tue 9 Aug 83 09:22:41-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Robotics Bibliography

Ambler, A.P. Popplestone, R.J. Kempf, K.G.*An experiment in the 
offline programming of robots.* Edinburgh U. A.I.  Dept.*Res. Paper 
170.*1982.

Ambler, A.P.* RAPT: an object level robot programming language.* 
Edinburgh U. A.I. Dept.*Res. Paper 172.*1982.

Brooks, R.A. Lozano-Perez, T.*A subdivision algorithm in configuration
space for findpath with rotation.* M.I.T.  A.I.  Lab.*Memo 684.*1982.

Brooks, R.A.*Solving the find path problem by representing free space 
as generalized cones.* M.I.T. A.I. Lab.*Memo 674.*1982.

Brooks, R.A.*Symbolic error analysis and robot planning.* M.I.T. A.I.
Lab.*Memo 685.*1982.

Cameron, S.* Body models for every body.* Edinburgh U.  A.I.  
Dept.*Working Paper 107.*1982.

Gueting, R.H. Wood, D.*Finding rectangle intersections by 
divide-and-conquer.* McMaster U. Comp.Sci. Unit.* Comp.Sci. Tech.Rpt.
No. 82-CS-04.*1982.

Hofri, M.*BIN packing: an analysis of the next fit algorithm.* 
Technion - Israel Inst. of Tech.  Comp.Sci.Dept.*Tech.Rpt. 242.*1982.

Hollerbach, J.M.*Computers, brains, and the control of movement.* 
M.I.T. A.I. Lab.*Memo 686.*1982.

Hollerbach, J.M.*Dynamic scaling of manipulator trajectories.* M.I.T.
A.I. Lab.*Memo 700.*1982.

Hollerbach, J.M.*Workshop on the design and control of dexterous hands
(held at the MIT Artificial Intelligence Laboratory on November 5-6,
1981).* M.I.T. A.I. Lab.*Memo 661.*1982.

Hopcroft, J.E. Joseph, D.A. Whitesides, S.H.*On the movement of robot 
arms in 2-dimensional bounded regions.*Cornell U.  
Comp.Sci.Dept.*Tech.Rpt. 82-486.*1982.

Kirkpatrick, D.* Optimal search in planar subdivisions.* British 
Columbia U. Comp.Sci.Dept.*Tech.Rpt. 81-13.*1981.

Kouta, M.M. O'Rourke, J.*Fast algorithms for polygon decomposition.* 
Johns Hopkins U. E.E. & Comp.Sci.Dept.* Tech.Rpt. 82/10.*1982.

Koutsou, A.*A survey of model based robot programming languages.* 
Edinburgh U. A.I. Dept.*Working Paper 108.* 1981.

Lozano-Perez, T.* Robot programming.* M.I.T. A.I. Lab.*Memo 698.*1982.

Mason, M.T.* Manipulator grasping and pushing operations.* M.I.T.  
A.I. Lab.*TR-690, Ph.D. Thesis.*1982.

Mavaddat, F.* WATSON/I: WATerloo's SONically guided robot.*Waterloo U.
Comp.Sci.Dept.*Res.Rpt. CS-82-16.*1982.

Moran, S.*On the densest packing of circles in convex figures.* 
Technion - Israel Inst. of Tech. Comp.Sci.Dept.* Tech.Rpt. 241.*1982.

Mujtaba, M.S.* Motion sequencing of manipulators.* Stanford U.  
Comp.Sci.Dept.*STAN-CS-82-917, Ph.D. Thesis (Department of Industrial 
Engineering and Engineering Management).*1982.  291p.

Myers, E.W.*An O(ElogE+I) expected time algorithm for the planar 
segment intersection problem.* Arizona U.  Comp.Sci.Dept.*TR 
82-03.*1982.

Popplestone, R.J.*Discussion document on body modelling for robot 
languages.* Edinburgh U. A.I. Dept.*Working Paper 110.*1982.

Shneier, M.* Hierarchical sensory processes for 3-D robot vision.* 
Maryland U. Comp.Sci. Center.*TR-1165.*1982.

Slotnick, D.L.* Time constrained computation.* Illinois U.  
Comp.Sci.Dept.*UIUCDCS-R-82-1090.*1982.

Srinivasan, C.V.*Notes on object centered associative memory 
organization.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-19.*1981.

Taylor, R.H.*An integrated robot system architecture.* IBM Watson Res.
Center.*RC 9824.*1983.

Yin, B.*A proposal for studying how to use vision within a robot 
language which reasons about spatial relationships.*Edinburgh U. A.I.
Dept.*Working Paper 109.*1982.

------------------------------

Date: Tue 9 Aug 83 09:54:44-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Vision Bibliography


A. Athukorala, Some Hardware for Computer Vision.  Edinburgh U. A.I.
Dept., Working Paper 102, 1981.

H.H. Baker, Depth from Edge and Intensity Based Stereo.  Ph.D. Thesis,
Stanford U. Comp. Sci. Dept., STAN-CS-82-930; Stanford U. Comp. Sci.
Dept. A.I. Lab., AIM-347, 1982, 90p.  Based on a Ph.D. thesis
submitted to the University of Illinois at Urbana-Champaign in
September of 1981.

R.J. Beattie, Edge Detection for Semantically Based Early Visual
Processing.  Edinburgh U. A.I. Dept., Res. Paper 174, 1982.

M. Brady and W.E.L. Grimson, The Perception of Subjective Surfaces.  
M.I.T. A.I. Lab., Memo 666, 1981.

I. Chakravarty, The Use of Characteristic Views as a Basis for
Recognition of Three-Dimensional Objects.  Rensselaer Polytechnic
Inst. Image Processing Lab., IPL-TR-034, 1982.

L. Dreschler, Ermittlung markanter Punkte auf den Bildern bewegter
Objekte und Berechnung einer 3D-Beschreibung auf dieser Grundlage
[Detection of prominent points in images of moving objects and
computation of a 3D description from them].  Hamburg U. Fachbereich
Informatik, Bericht Nr. 83, 1981.

J.-O. Eklundh, Knowledge Based Image Analysis: Some Aspects of Images
using Other Types of Information.  Royal Inst. of Tech., Stockholm,
Num.Anal. & Computing Sci. Dept., TRITA-NA-8206, 1982.

R.B. Fisher, A Structured Pattern Matching Approach to Intermediate
Level Vision.  Edinburgh U. A.I. Dept., Res. Paper 177, 1982.

W.B. Gevarter, An Overview of Computer Vision.  U.S. National Bureau
of Standards, NBSIR 82-2582, 1982.

W.E.L. Grimson, The Implicit Constraints of the Primal Sketch.  M.I.T.
A.I. Lab., Memo 663, 1981.

W.I. Grosky, Towards a Data Model for Integrated Pictorial Databases.
Wayne State U. Comp. Sci. Dept., CSC-82-012, 1982.

R.F. Hauser, Some experiments with stochastic edge detection, IBM
Watson Res. Center, RZ 1210, 1983.

E.C. Hildreth and S. Ullman, The Measurement of Visual Motion.  M.I.T.
A.I. Lab., Memo 699, 1982.

T. Kanade (ed.), Vision.  Stanford U. Comp. Sci. Dept., 
STAN-CS-82-938; Stanford U. Comp. Sci. Dept. Heuristic Programming
Project, HPP-82-020, 1982, 220p.  Assistant Editor: Steven A. Shafer.
Contributors:  David A. Bourne, Rodney Brooks, Nancy H. Cornelius,
James L. Crowley, Hiromichi Fujisawa, Martin Herman, Fuminobu Komura,
Bruce D. Lucas, Steven A. Shafer, David R. Smith, Steven L. Tanimoto, 
Charles E. Thorpe.

A. Krzesinski, The normalised convolution algorithm, IBM Watson Res.
Center, RC 9834, 1983.

M.A. Lavin and L.I. Lieberman, AML/V: An Industrial Machine Vision
Programming System.  IBM Watson Res. Center, RC 9390, 1982.

C.N. Liu, M. Fatemi, and R.C. Waag, Digital Processing for Improvement
of Ultrasonic Abdominal Images.  IBM Watson Res. Center, RC 9499, 
1982.

D. Montuno and A. Fournier, Detecting intersection among star
polygons, Toronto U. Comp. Systems Res. Group, CSRG-146, 1982.

T.N. Mudge and T.A. Rahman, Efficiency of feature dependent
algorithms for the parallel processing of images, Michigan U.
Computing Res.  Lab., CRL-TR-11-83, 1983.

T.M. Nicholl, D.T. Lee, Y.Z. Liao, and C.K. Wong, Constructing the X-Y
convex hull of a set of X-Y polygons, IBM Watson Res. Center, RC 9737,
1982.

E. Pervin and J.A. Webb, Quaternions in computer vision and robotics, 
Carnegie Mellon U. Comp. Sci. Dept., CMU-CS-82-150, 1982.

T. Poggio, H.K. Nishihara, and K.R.K. Nielsen, Zero Crossings and
Spatiotemporal Interpolation in Vision: Aliasing and Electrical
Coupling Between Sensors.  M.I.T. A.I. Lab., Memo 675, 1982.

T. Poggio, Visual Algorithms.  M.I.T. A.I. Lab., Memo 683, 1982.

W. Richards, H.K. Nishihara, and B. Dawson, CARTOON: A Biologically
Motivated Edge Detection Algorithm.  M.I.T. A.I. Lab., Memo 668, 1982.

A. Rosenfeld, Computer vision, Maryland U. Comp. Sci. Center, TR-1157,
1982.

A. Rosenfeld, Trends and perspectives in computer vision, Maryland U.
Comp. Sci. Center, TR-1194, 1982.

I.K. Sethi and R. Jain, Determining Three Dimensional Structure of
Rotating Objects.  Wayne State U. Comp. Sci. Dept., CSC-83-001, 1983.

M. Shneier, Hierarchical sensory processes for 3-D robot vision, 
Maryland U. Comp. Sci. Center, TR-1165, 1982.

C.L. Sidner, Protocols of Users Manipulating Visually Presented
Information with Natural Language.  Bolt, Beranek and Newman, Inc.,
BBN 5128, 1982.

R.W. Sjoberg, Atmospheric Effects in Satellite Imaging of Mountainous
Terrain.  M.I.T. A.I. Lab., TR-688.

S.N. Srihari, Pyramid representations for solids, SUNY, Buffalo, Comp.
Sci. Dept., Tech.Rpt. 200, 1983.

K.A. Stevens, Implementation of a Theory for Inferring Surface Shape
from Contours.  M.I.T. A.I. Lab., Memo 676, 1982.

D. Terzopoulos, Multi-Level Reconstruction of Visual Surfaces:
Variational Principles and Finite Element Representations.  M.I.T.
A.I. Lab., Memo 671, 1982.

R.Y. Tsai, Multiframe Image Point Matching and 3-D Surface
Reconstruction.  IBM Watson Res. Center, RC 9398, 1982.

R.Y. Tsai and T.S. Huang, Analysis of 3-D Time Varying Scene.  IBM
Watson Res. Center, RC 9479, 1982.

R.Y. Tsai, 3-D inference from the motion parallax of a conic arc and
a point in two perspective views, IBM Watson Res. Center, RC 9818,
1983.

R.Y. Tsai, Estimating 3-D motion parameters and object surface
structures from the image motion of conic arcs, I: theoretical basis,
IBM Watson Res. Center, RC 9787, 1983.

R.Y. Tsai, Estimating 3-D motion parameters and object surface
structures from the image motion of conic arcs, IBM Watson Res.
Center, RC 9819, 1983.

L. Uhr and L. Schmitt, The Several Steps from ICON to SYMBOL, using
Structured Cone/Pyramids.  Wisconsin U. Comp. Sci. Dept., Tech.Rpt.
481, 1982.

P.H. Winston, T.O. Binford, B. Katz, and M. Lowry, Learning Physical
Descriptions from Functional Definitions, Examples, and Precedents.
M.I.T. A.I. Lab., Memo 679, 1982.

M.-M. Yau, Generating quadtrees of cross-sections from octrees, SUNY,
Buffalo, Comp. Sci. Dept., Tech.Rpt. 199, 1982.

B. Yin, A Proposal for Studying How to Use Vision Within a Robot
Language which Reasons about Spatial Relationships.  Edinburgh U.
A.I. Dept., Working Paper 109, 1982.

------------------------------

Date: Tue 9 Aug 83 08:54:15-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Speech Understanding Bibliography

Lucassen, J.M.*Discovering phonemic base forms automatically: an 
information theoretic approach.* IBM Watson Res. Center.*RC 
9833.*1983.

Waibel, A.*Towards very large vocabulary word recognition.* Carnegie 
Mellon U. Comp.Sci.Dept.*CMU-CS-82-144.*1982.

------------------------------

Date: Tue 9 Aug 83 08:49:15-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Pattern Recognition Bibliography

Barnes, E.R.*An algorithm for separating patterns by ellipsoids.* IBM 
Watson Res. Center.*RC 9500.*1982.

Chiang, W.P. Teorey, T.J.*A method for database record clustering.* 
Michigan U. Computing Res.Lab.* CRL-TR-05-82.*1982.

Findler, N.V. Cromp, R.F.*An artificial intelligence technique to 
generate self-optimizing experimental designs.* Arizona State U.  
Comp.Sci.Dept.*TR-83-001.* 1983.

Findler, N.V. Lo, R.*A note on the functional estimation of values of 
hidden variables--an extended module for expert systems.* Arizona 
State U. Comp.Sci.Dept.*TR-82-004.* 1982.

Jenkins, J.M.* Symposium on computer applications to cardiology:  
introduction and automated electrocardiography and arrhythmia 
monitoring.* Michigan U. Computing Res.Lab.*CRL-TR-20-83.*1983.

Kumar, V. Kanal, L.N.* Branch and bound formulations for sequential 
and parallel And/Or tree search and their applications to pattern 
analysis and game playing.* Maryland U. Comp.Sci.  
Center.*TR-1144.*1982.

O'Rourke, J.*The signature of a curve and its applications to pattern 
recognition (preliminary version).* Johns Hopkins U. E.E. & 
Comp.Sci.Dept.*Tech.Rpt. 82/09.*1982.

Seidel, R.*A convex hull algorithm for point sets in even dimensions.*
British Columbia U. Comp.Sci.Dept.*Tech.Rpt.  81-14.*1981.

Varah, J.M.*On fitting exponentials by nonlinear least squares.* 
British Columbia U. Comp.Sci.Dept.*Tech.Rpt.  82-02.*1982.

------------------------------

End of AIList Digest
********************

∂09-Aug-83  2149	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #37
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Aug 83  21:47:14 PDT
Date: Tuesday, August 9, 1983 10:33AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #37
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Aug 1983       Volume 1 : Issue 37

Today's Topics:
  Representation - Bibliography,
  Natural Language Understanding - Bibliography,
  Cognition - Bibliography
----------------------------------------------------------------------

Date: Tue 9 Aug 83 08:51:05-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Representation Bibliography

Abdallah, M.A.N.* Data types as algorithms.* Waterloo U.  
Comp.Sci.Dept.*Res.Rpt. CS-82-10.*1982.

Alterman, R.E.*A system of seven coherence relations for 
hierarchically organizing event concepts in text.* Texas U.  
Comp.Sci.Dept.*TR-209.*1982.

Amit, Y.*Review of conceptual dependency theory.* Edinburgh U. A.I.  
Dept.*Occasional Paper 008.*1977.

Andrews, G.R. Schneider, F.B.*Concepts and notations for concurrent 
programming.* Arizona U. Comp.Sci.Dept.*TR 82-12.*1982.

Ericson, L.W.* DPL-82: a language for distributed processing.* 
Carnegie Mellon U. Comp.Sci.Dept.* CMU-CS-82-129.*1982.

Forbus, K.D.* Qualitative process theory.* M.I.T. A.I.  Lab.*Memo 
664.*1982.

Katz, R.H. Lehman, T.J.*Storage structures for versions and 
alternatives.* Wisconsin U. Comp.Sci.Dept.*Tech.Rpt.  479.*1982.

Lucas, P. Risch, T.*Representation of factual information by equations
and their evaluation.* IBM Watson Res.  Center.*RJ 3362.*1982.

Luger, G.F.*Some artificial intelligence techniques for describing 
problem solving behaviour.* Edinburgh U. A.I.  Dept.*Occasional Paper 
007.*1977.

Lytinen, S.L. Schank, R.C.* Representation and translation.*Yale U.  
Comp.Sci.Dept.*Res.Rpt. 234.*1982.

Mahr, B. Makowsky, J.A.*Characterizing specification languages which 
admit initial semantics.* Technion - Israel Inst. of Tech.  
Comp.Sci.Dept.*Tech.Rpt. 232.*1982.

Mercer, R.E. Reiter, R.*The representation of presuppositions using 
defaults.* British Columbia U.  Comp.Sci.Dept.*Tech.Rpt. 82-01.*1982.

Orlowska, E. Pawlak, Z.*Representation of nondeterministic 
information.* Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS 
rpt. no. 450.*1981.

Orlowska, E.*Logic of vague concepts: applications of rough sets.* 
Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS rpt. no.  
474.*1982.

Orlowska, E.*Semantics of vague concepts: application of rough sets.* 
Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS rpt. no.  
469.*1982.

Pawlak, Z.* Rough functions.* Polish Academy of Sciences.  Inst. of 
Comp.Sci.*ICS PAS rpt. no. 467.*1981.

Pawlak, Z.* Rough sets: power set hierarchy.* Polish Academy of 
Sciences. Inst. of Comp.Sci.*ICS PAS rpt. no.  470.*1982.

Pawlak, Z.*About conflicts.* Polish Academy of Sciences.  Inst. of 
Comp.Sci.*ICS PAS rpt. no. 451.*1981.

Pawlak, Z.*Some remarks about rough sets.* Polish Academy of Sciences.
Inst. of Comp.Sci.*ICS PAS rpt. no. 456.* 1982.

Sridharan, N.S.*A flexible structure for knowledge: examples of legal 
concepts.* Rutgers U. Comp.Sci.Res.Lab.* LRP-TR-014.*1982.

Srinivasan, C.V.*Notes on object centered associative memory 
organization.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-19.*1981.

Weiser, M. Israel, B. Stanfill, C. Trigg, R. Wood, R.* Working papers 
in knowledge representation and acquisition.* Maryland U. Comp.Sci.  
Center.*TR-1175.* 1982.  Contents: Israel, B. Weiser, M.*Towards a 
perceptual system for monitoring computer behavior; Stanfill, C.* 
Geometry to causality: a hierarchy of subdomains for machine world; 
Trigg, R.*Acquiring knowledge for an electronic textbook; Wood, R.J.*A
model for interactive program synthesis.

Winston, P.H. Binford, T.O. Katz, B. Lowry, M.* Learning physical 
descriptions from functional definitions, examples, and precedents.* 
M.I.T. A.I. Lab.*Memo 679.* 1982.

Woods, W.A. Bates, M. Bobrow, R.J. Goodman, B. Israel, D.  Schmolze, 
J. Schudy, R. Sidner, C.L. Vilain, M.*Research in knowledge 
representation for natural language understanding. Annual report: 1 
September 1981 to 31 August 1982.* Bolt, Beranek and Newman, Inc.*BBN 
rpt.  5188.*1982.

------------------------------

Date: Tue 9 Aug 83 08:46:47-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Natural Language Understanding Bibliography

Allen, E.M.*Acquiring linguistic knowledge for word experts.* Maryland
U. Comp.Sci. Center.*TR-1166.

Alterman, R.E.*A system of seven coherence relations for 
hierarchically organizing event concepts in text.* Texas U.  
Comp.Sci.Dept.*TR-209.*1982.

Amit, Y.*Review of conceptual dependency theory.* Edinburgh U. A.I.  
Dept.*Occasional Paper 008.*1977.

Ballard, B.W. Lusth, J.C.*An English-language processing system which 
'learns' about new domains.* Duke U.  Comp.Sci.Dept.*CS-1982-18.*1982.

Ballard, B.W.*A "domain class" approach to transportable natural 
language processing.* Duke U. Comp.Sci.Dept.* CS-1982-11.*1982.

Bancilhon, F. Richard, P.* TQL, a textual query language.* 
INRIA.*Rapport de Recherche 145.*1982.

Barr, A. Cohen, P.R. Fagan, L.*Understanding spoken language (Chapter 
V of Volume I of the Handbook of Artificial Intelligence, edited by 
Avron Barr and Edward A. Feigenbaum).* Stanford U. Comp.Sci.Dept.* 
STAN-CS-82-934; Stanford U. Comp.Sci.Dept. Heuristic Programming 
Project.*HPP-82-016.*1982.  52p.

Black, J.B. Galambos, J.A. Read, S.* Story comprehension.* Yale U.  
Cognitive Science Program.*Tech.Rpt. 017.*1982.

Black, J.B. Seifert, C.M.*The psychological study of story 
understanding.* Yale U. Cognitive Science Program.* Tech.Rpt.  
018.*1982.

Black, J.B. Wilkes-Gibbs, D. Gibbs, R.W. Jr.*What writers need to know
that they don't know they need to know.* Yale U. Cognitive Science
Program.*Tech.Rpt. 08.*1981.

Carbonell, J.G.* Meta-language utterances in purposive discourse.* 
Carnegie Mellon U. Comp.Sci.Dept.* CMU-CS-82-125.*1982.

Clinkenbeard, D.J.*A quite general text analysis method.* Colorado U.
Comp.Sci.Dept.*CU-CS-237-82.*1982.

Culik, K. Natour, I.A.* Ambiguity types of formal grammars.*Wayne 
State U. Comp.Sci.Dept.*CSC-82-014.*1982.

Dellarosa, D. Bourne, L.E. Jr.*Text-based decisions: changes in the 
availability of facts due to instructions and the passage of time.* 
Colorado U. Cognitive Sci.Inst.* Tech.rpt. 115-ONR.*1982.

Denny, J.P.* Whorf's Algonquian: old evidence and new ideas concerning
linguistic relativity.* Western Ontario U.  Cognitive Science
Centre.*COGMEM 11.*1982.

Dolev, D. Reischuk, R. Strong, H.R.*'Eventual' is earlier than 
'immediate'.* IBM Watson Res. Center.*RJ 3632.*1982.

Dyer, M.G.*In-depth understanding: a computer model of integrated 
processing for narrative comprehension.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 219, Ph.D. Thesis. Dyer, M.G.* 1982.

Gawron, J.M. King, J.J. Lamping, J. Loebner, J.J. Paulson, E.A.  
Pullum, G.K. Sag, I.A. Wasow, T.A.*Processing English with a 
generalized phrase structure grammar.* Hewlett Packard Co.  
Comp.Sci.Lab.*CSL-82-005.*1982.

Greene, B.R. Fujisake, T.*A probabilistic approach for dealing with 
ambiguous syntactic structures.* IBM Watson Res. Center.*RC 
9764.*1982.

Hartmanis, J.*On Goedel speed-up and succinctness of language 
representation.* Cornell U. Comp.Sci.Dept.* Tech.Rpt. 82-485.*1982.

Israel, D.J.*On interpreting semantic network formalisms.* Bolt, 
Beranek and Newman, Inc.*BBN rpt. 5117.*1982.

Jensen, K. Heidorn, G.E.*The fitted parse: 100% parsing capability in 
a syntactic grammar of English.* IBM Watson Res. Center.*RC 
9729.*1982.

Johnson, P.N. Robertson, S.P.* MAGPIE: a goal based model of 
conversation.* Yale U. Comp.Sci.Dept.*Res.Rpt. 206.* 1981.

Katz, B. Winston, P.H.* Parsing and generating English using 
commutative transformations.* M.I.T. A.I. Lab.* Memo 677.*1982.

Lamping, J. King, J.J.* LM/GPSG--a prototype workstation for 
linguists.* Hewlett Packard Co. Comp.Sci.Lab.* CSL-82-011; Hewlett 
Packard Co. Comp.Res. Center.* CRC-TR-82-006.*1982.

Lehnert, W. Dyer, M.G. Johnson, P.N. Yang, C.J. Harley, S.* BORIS: an 
experiment in in-depth understanding of narratives.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 188.*1981.

Lehnert, W.G.* Affect units and narrative summarization.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 179.*1980.

Lytinen, S.L. Schank, R.C.* Representation and translation.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 234.*1982.

Mann, W.C. Matthiessen, C.M.I.M.*Two discourse generators, by William 
C. Mann; A grammar and a lexicon for a text production system, by 
Christian M.I.M. Matthiessen.* Southern Cal U.  
Info.Sci.Inst.*ISI/RR-82-102.*1982.

Mann, W.C.*The anatomy of a systemic choice.* Southern Cal U.  
Info.Sci.Inst.*ISI/RR-82-104.*1982.

Martin, P.A.*Integrating local information to understand dialog.* 
Stanford U. Comp.Sci.Dept.*STAN-CS-82-941; Stanford U. Comp.Sci.Dept.
A.I. Lab.*AIM-348, Ph.D.  Thesis. Martin, P.A.*1982.  125p.

Miller, L.A.*"Natural language texts are not necessarily grammatical 
and unambiguous. Or even complete."* IBM Watson Res. Center.*RC 
9441.*1982.

Misek-Falkoff, L.D.*The new field of software linguistics: an 
early-bird view.* IBM Watson Res. Center.*RC 9421.* 1982.

Misek-Falkoff, L.D.* Software science and natural language: a 
unification of Halstead's counting rules for programs and English 
text, and a claim space approach to extensions.* IBM Watson Res.  
Center.*RC 9420.*1982.

Mueckstein, E.-M.M.* Parsing for collecting syntactic statistics.* IBM
Watson Res. Center.*RC 9836.*1983.

Mueckstein, E.M.M.* Q-Trans: query translation into English.* IBM 
Watson Res. Center.*RC 9841.*1983.

Perlman, G.* Natural artificial languages: low-level processes.* Cal.
U., San Diego. Human Info. Processing Center.*Rpt. 8208.*1982.

Peterson, J.L.* Webster's seventh new collegiate dictionary: a 
computer-readable file format.* Texas U.  Comp.Sci.Dept.*TR-196.*1982.

Reiser, B.J. Black, J.B. Lehnert, W.G.* Thematic knowledge structures 
in the understanding and generation of narratives.* Yale U. Cognitive 
Science Program.* Tech.Rpt. 016.*1982.

Reiser, B.J. Black, J.B.*Processing and structural models of 
comprehension.* Yale U. Cognitive Science Program.* Tech.Rpt.  
012.*1982.

Schank, R.C. Burstein, M.*Modeling memory for language understanding.*
Yale U. Comp.Sci.Dept.*Res.Rpt. 220.* 1982.

Schank, R.C. Collins, G.C. Davis, E. Johnson, P.N. Lytinen, S.  
Reiser, B.J.*What's the point?* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
205.*1981.

Shwartz, S.P.*The search for pronominal referents.* Yale U. Cognitive 
Science Program.*Tech.Rpt. 10.*1981.

Sidner, C.L. Bates, M.*Requirements for natural language understanding
in a system with graphic displays.* Bolt, Beranek and Newman, Inc.*BBN
rpt. 5242.*1983.

Sidner, C.L.* Protocols of users manipulating visually presented 
information with natural language.* Bolt, Beranek and Newman, Inc.*BBN
rpt. 5128.*1982.

Smith, D.E.* FOCUSER: a strategic interaction paradigm for language 
acquisition.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-36, Ph.D. Thesis.
Smith, D.E.*1982.

Stabler, E.P. Jr.* Programs, rule governed behavior and grammars in 
theories of language acquisition and use.* Western Ontario U.  
Cognitive Science Centre.*COGMEM 07.* 1982.

Usui, T.*An experimental grammar for translating English to Japanese.*
Texas U. Comp.Sci.Dept.*TR-201.*1982.

Wilensky, R.*Talking to UNIX in English: an overview of an on-line 
consultant.* California U., Berkeley.  Comp.Sci.Div.*UCB/CSD 
82/104.*1982.

Woods, W.A. Bates, M. Bobrow, R.J. Goodman, B. Israel, D.  Schmolze, 
J. Schudy, R. Sidner, C.L. Vilain, M.*Research in knowledge 
representation for natural language understanding. Annual report: 1 
September 1981 to 31 August 1982.* Bolt, Beranek and Newman, Inc.*BBN 
rpt.  5188.*1982.

------------------------------

Date: Tue 9 Aug 83 08:45:21-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Cognition Bibliography

Ballard, B.W. Lusth, J.C.*An English-language processing system which 
'learns' about new domains.* Duke U. Comp.Sci.Dept.*CS-1982-18.*1982.

Barr, A.* Artificial intelligence: cognition as computation.* Stanford
U. Comp.Sci.Dept.*STAN-CS-82-956; Stanford U. Comp.Sci.Dept.  
Heuristic Programming Project.* HPP-82-29.*1982.  28p.

Black, J.B. Galambos, J.A. Reiser, B.J.*Coordinating discovery and 
verification research.* Yale U. Cognitive Science Program.*Tech.Rpt.  
013.*1982.

Black, J.B. Galambos, J.A. Read, S.* Story comprehension.* Yale U.  
Cognitive Science Program.*Tech.Rpt. 017.*1982.

Black, J.B. Seifert, C.M.*The psychological study of story 
understanding.* Yale U. Cognitive Science Program.* Tech.Rpt.  
018.*1982.

Black, J.B. Wilkes-Gibbs, D. Gibbs, R.W. Jr.*What writers need to know
that they don't know they need to know.* Yale U. Cognitive Science
Program.*Tech.Rpt. 08.*1981.

Bonar, J. Soloway, E.*Uncovering principles of novice programming.* 
Yale U. Comp.Sci.Dept.*Res.Rpt. 240.*1982.

Carbonell, J.G.* Learning by analogy: formulating and generalizing 
plans from past experience.* Carnegie Mellon U.  
Comp.Sci.Dept.*CMU-CS-82-126.*1982.

Carroll, J.M. Mack, R.L.* Metaphor, computing systems, and active 
learning.* IBM Watson Res. Center.*RC 9636.*1982.

Cohen, P.R.*Models of cognition (Chapter XI of Volume III of the 
Handbook of Artificial Intelligence, edited by Paul R. Cohen and 
Edward A. Feigenbaum).* Stanford U.  Comp.Sci.Dept.*STAN-CS-82-936; 
Stanford U. Comp.Sci.Dept.  Heuristic Programming 
Project.*HPP-82-018.*1982.  87p.

Conrad, M.* Microscopic macroscopic interface in biological 
information processing.* Wayne State U. Comp.Sci.Dept.* 
CSC-83-003.*1983.

Doyle, J.*The foundations of psychology: a logico-computational 
inquiry into the concept of mind.* Carnegie Mellon U.  
Comp.Sci.Dept.*CMU-CS-82-149.*1982.

Dyer, M.G.*In-depth understanding: a computer model of integrated 
processing for narrative comprehension.* Yale U.  
Comp.Sci.Dept.*Res.Rpt. 219, Ph.D. Thesis. Dyer, M.G.* 1982.

Ehrlich, K. Soloway, E.*An empirical investigation of the tacit plan 
knowledge in programming.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
236.*1982.

Ericsson, K.A. Chase, W.G.* Exceptional memory.* Carnegie Mellon U.  
Psych.Dept.*Tech.Rpt. 08.*1982.

Firdman, H.E.*Toward a theory of cognizing systems: the search for an 
integrated theory of AI.* Hewlett Packard Co.  
Comp.Sci.Lab.*CSL-82-007; Hewlett Packard Co.  Comp.Res.  
Center.*CRC-TR-82-002.*1982.

Galambos, J.A.*Normative studies of six characteristics of our 
knowledge of common activities.* Yale U. Cognitive Science 
Program.*Tech.Rpt. 014.*1982.

Good, D.I.* Reusable problem domain theories.* Texas U.  Computing 
Sci.Inst.*TR-031.*1982.

Hollerbach, J.M.*Computers, brains, and the control of movement.* 
M.I.T. A.I. Lab.*Memo 686.*1982.

Israel, D.J.*On interpreting semantic network formalisms.* Bolt, 
Beranek and Newman, Inc.*BBN rpt. 5117.*1982.

Kampfner, R.R. Conrad, M.*Sequential behavior and stability properties
of enzymatic neuron networks.* Wayne State U.  
Comp.Sci.Dept.*CSC-82-011.*1982.

Lansner, A.* Information processing in a network of model neurons: a 
computer simulation study.* Royal Inst. of Tech., Stockholm.  
Num.Anal. & Computing Sci.Dept.* TRITA-NA-8211.*1982.

Mather, J.A.* Saccadic eye movements to seen and unseen targets:  
preprogramming and sensory input in motor control.* Western Ontario U.
Cognitive Science Centre.* COGMEM 10.*1982.

Mitchell, T.M. Utgoff, P.E. Banerji, R.* Learning by experimentation:
acquiring and modifying problem solving heuristics.* Rutgers U.  
Comp.Sci.Res.Lab.*LCSR-TR-31.* 1982.

Poggio, T. Koch, C.*Nonlinear interactions in a dendritic tree:  
localization, timing, and role in information processing.* M.I.T.  
A.I. Lab.*Memo 657.*1981.

Reiser, B.J. Black, J.B. Lehnert, W.G.* Thematic knowledge structures 
in the understanding and generation of narratives.* Yale U. Cognitive 
Science Program.* Tech.Rpt. 016.*1982.

Richards, W. Nishihara, H.K. Dawson, B.* CARTOON: a biologically 
motivated edge detection algorithm.* M.I.T.  A.I. Lab.*Memo 668.*1982.

Schank, R.C. Burstein, M.*Modeling memory for language understanding.*
Yale U. Comp.Sci.Dept.*Res.Rpt. 220.* 1982.

Schank, R.C.*Representing meaning: an artificial intelligence 
perspective.* Yale U. Cognitive Science Program.*Tech.Rpt. 11.*1981.

Seifert, C.M. Robertson, S.P.*On-line processing of pragmatic 
inferences.* Yale U. Cognitive Science Program.*Tech.Rpt. 015.*1982.

Shwartz, S.P.*Three-dimensional mental rotation revisited: picture 
plane rotation is really faster than depth rotation.* Yale U.  
Cognitive Science Program.*Tech.Rpt.  09.*1981.

Sidner, C.L.* Protocols of users manipulating visually presented 
information with natural language.* Bolt, Beranek and Newman, Inc.*BBN
rpt. 5128.*1982.

Smith, D.E.* FOCUSER: a strategic interaction paradigm for language 
acquisition.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-36, Ph.D. Thesis.
Smith, D.E.*1982.

Soloway, E. Bonar, J. Ehrlich, K.* Cognitive strategies and looping 
constructs: an empirical study.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
242.*1982.

Soloway, E. Ehrlich, K. Bonar, J. Greenspan, J.*What do novices know 
about programming?* Yale U. Comp.Sci.Dept.* Res.Rpt. 218.*1982.

Srinivasan, C.V.*Notes on object centered associative memory 
organization.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TR-19.*1981.

Stabler, E.P. Jr.* Programs, rule governed behavior and grammars in 
theories of language acquisition and use.* Western Ontario U.  
Cognitive Science Centre.*COGMEM 07.* 1982.

Utgoff, P.E.*Acquisition of appropriate bias for inductive concept 
learning.* Rutgers U. Comp.Sci.Res.Lab.* LCSR-TM-02.*1982.

------------------------------

End of AIList Digest
********************

∂09-Aug-83  2330	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #38
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Aug 83  23:17:23 PDT
Date: Tuesday, August 9, 1983 10:38AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #38
To: AIList@SRI-AI


AIList Digest            Tuesday, 9 Aug 1983       Volume 1 : Issue 38

Today's Topics:
  Programming - Bibliography,
  Databases - Bibliography,
  Computer Science - Bibliography
----------------------------------------------------------------------

Date: Tue 9 Aug 83 08:50:19-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Programming Bibliography

[Includes programming environments and techniques
as well as automatic programming.]

Abdallah, M.A.N.* Data types as algorithms.* Waterloo U.  
Comp.Sci.Dept.*Res.Rpt. CS-82-10.*1982.

Andrews, G.R. Schneider, F.B.*Concepts and notations for concurrent 
programming.* Arizona U. Comp.Sci.Dept.*TR 82-12.*1982.

Andrews, G.R.* Distributed programming languages.* Arizona U.  
Comp.Sci.Dept.*TR 82-13.*1982.

Archer, J.E. Jr.*The design and implementation of a cooperative 
program development environment.* Cornell U.  Comp.Sci.Dept.*Tech.rpt.
81-468, Ph.D. Thesis. Archer, J.E. Jr.*1982.

Bakker, J.W. de. Zucker, J.I.* Processes and the denotational semantics
of concurrency.* Mathematisch Centrum.*IW 209/82.*1982.

Barber, G.*Supporting organizational problem solving with a 
workstation.* M.I.T. A.I. Lab.*Memo 681.*1982.

Bergstra, J.A. Klop, J.W.* Fixed point semantics in process algebras.*
Mathematisch Centrum.*IW 206/82.*1982.

Bergstra, J.A. Tucker, J.V.* Hoare's logic for programming languages 
with two data types.* Mathematisch Centrum.*IW 207/82.*1982.

Best, E.* Relational semantics of concurrent programs (with some 
applications).* Newcastle Upon Tyne U. Computing Lab.*No. 180.*1982.

Bobrow, D.G. Stefik, M.*The LOOPS manual (preliminary version).* 
Xerox. Palo Alto Res. Center.*Memo KB-VLSI-81-13.*1981.  (working 
paper).

Bonar, J. Soloway, E.*Uncovering principles of novice programming.* 
Yale U. Comp.Sci.Dept.*Res.Rpt. 240.*1982.

Burger, W.F. Halim, N. Pershing, J.A. Parr, F.N. Strom, R.E. Yemini, 
S.*Draft NIL reference manual.* IBM Watson Res. Center.*RC 9732.*1982.

Culik, K. Rizki, M.M.* Mathematical constructive proofs as computer 
programs.* Wayne State U. Comp.Sci.Dept.* CSC-83-004.*1983.

diSessa, A.A.*A principled design for an integrated computational 
environment.* M.I.T. Lab. for Comp.Sci.* TM-223.*1982.

Ehrlich, K. Soloway, E.*An empirical investigation of the tacit plan 
knowledge in programming.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
236.*1982.

Elrad, T. Francez, N.*A weakest precondition semantics for 
communicating processes.* Technion - Israel Inst. of Tech.  
Comp.Sci.Dept.*Tech.Rpt. 244.*1982.

Ericson, L.W.* DPL-82: a language for distributed processing.* 
Carnegie Mellon U. Comp.Sci.Dept.* CMU-CS-82-129.*1982.

Eyries, F.*Synthese d'images de scenes composees de spheres [Image 
synthesis of scenes composed of spheres].* INRIA.*Rapport de 
Recherche 163.*1982.

Good, D.I.*The proof of a distributed system in GYPSY.* Texas U.  
Computing Sci.Inst.*TR-030.*1982.

Israel, B.*Customizing a personal computing environment through object
oriented programming.* Maryland U.  Comp.Sci.  Center.*TR-1158.*1982.

Jobmann, M.*ILMAOS - Eine Sprache zur Formulierung von 
Rechensystemmodellen [ILMAOS: a language for formulating models of 
computing systems].* Hamburg U. Fachbereich Informatik.* Bericht Nr.
91.*1982.

Kanasaki, K. Yamaguchi, K. Kunii, T.L.*A software development system 
supported by a database of structures and operations.* Tokyo U.  
Info.Sci.Dept.*Tech.Rpt.  82-15.*1982.

Kant, E. Newell, A.* Problem solving techniques for the design of 
algorithms.* Carnegie Mellon U. Comp.Sci.Dept.* CMU-CS-82-145.*1982.

Krafft, D.B.* AVID: a system for the interactive development of 
verifiably correct programs.* Cornell U.  Comp.Sci.Dept.*Tech.rpt.  
81-467.*1981.

Lacos, C.A. McDermott, T.S.*Interfacing with the user of a syntax 
directed editor.* Tasmania U. Info.Sci.Dept.*No.  R82-03.*1982.

Lamping, J. King, J.J.* IZZI--a translator from Interlisp to 
Zetalisp.* Hewlett Packard Co. Comp.Sci.Lab.* CSL-82-010; Hewlett 
Packard Co. Comp.Res. Center.* CRC-TR-82-005.*1982.

LeBlanc, T.J.*The design and performance of high level language 
primitives for distributed programming.* Wisconsin U.  
Comp.Sci.Dept.*Tech.Rpt. 492, Ph.D. Thesis.  LeBlanc, T.J.*1982.

Lengauer, C.*A methodology for programming with concurrency.* Toronto 
U. Comp. Systems Res. Group.* CSRG-142, Ph.D. Thesis. Lengauer, 
C.*1982.

Lesser, V. Corkill, D. Pavlin, J. Lefkowitz, L. Hudlicka, E. Brooks, 
R. Reed, S.*A high-level simulation testbed for cooperative 
distributed problem solving.* Massachusetts U. Comp. & 
Info.Sci.Dept.*COINS Tech.Rpt.  81-16.*1981.

Lieberman, H.*Seeing what your programs are doing.* M.I.T.  A.I.  
Lab.*Memo 656.*1982.

Lochovsky, F.H.* Alpha beta, edited by F.H. Lochovsky.* Toronto U.  
Comp. Systems Res. Group.*CSRG-143.*1982.  Contents: (1) Lochovsky, 
F.H. Tsichritzis, D.C.* Interactive query language for external data 
bases; (2) Mendelzon, A.O.*A database editor; (3) Lee, D.L.*A voice 
response system for an office information system; (4) Gibbs, S.J.* 
Office information models and the representation of 'office objects'; 
(5) Martin, P.* Tsichritzis, D.C.*A message management model; (6) 
Nierstrasz, O.*Tsichritzis, D.C.* Message flow modeling; (7) 
Tsichritzis, D.C. Christodoulakis, S. Faloutsos, C.* Design 
considerations for a message file server.

Mahr, B. Makowsky, J.A.*Characterizing specification languages which 
admit initial semantics.* Technion - Israel Inst. of Tech.  
Comp.Sci.Dept.*Tech.Rpt. 232.*1982.

McAllester, D.A.* Reasoning utility package. User's manual.  Version 
one.* M.I.T. A.I. Lab.*Memo 667.*1982.

Medina-Mora, R.* Syntax directed editing: towards integrated 
programming environments.* Carnegie Mellon U.  Comp.Sci.Dept.* Ph.D.  
Thesis. Medina-Mora, R.*1982.

Melese, B.* Metal, un langage de specification pour le systeme 
mentor.* INRIA.*Rapport de Recherche 142.*1982.

Olsen, D.R. Jr. Badler, N.*An expression model for graphical command 
languages.* Arizona State U.  Comp.Sci.Dept.*TR-82-001.*1982.

Paige, R.* Transformational programming--applications to algorithms 
and systems: summary paper.* Rutgers U.  
Comp.Sci.Dept.*DCS-TR-118.*1982.

Parr, F.N. Strom, R.E.* NIL: a high level language for distributed 
systems programming.* IBM Watson Res.  Center.*RC 9750.*1982.

Pratt, V.*Five paradigm shifts in programming language design and 
their realization in Viron, a dataflow programming environment.* 
Stanford U. Comp.Sci.Dept.* STAN-CS-82-951.*1982.  9p.

Rosenstein, L.S.* Display management in an integrated office 
workstation.* M.I.T. Lab for Comp.Sci.*TR-278.* 1982.

Ross, P.M.* TERAK LOGO user's manual (for version 1.0).* Edinburgh 
U. A.I. Dept.*Occasional Paper 021.*1980.

Schlichting, R.D. Schneider, F.B.*Using message passing for 
distributed programming: proof rules and disciplines.* Arizona U.  
Comp.Sci.Dept.*TR 82-05.*1982.

Schmidt, E.E.*Controlling large software development in a distributed 
environment.* Xerox. Palo Alto Res.  Center.*CSL-82-07, Ph.D. Thesis.
Schmidt, E.E. (University of California at Berkeley).*1982.

Senach, B.*Aide a la resolution de probleme par presentation graphique
des informations.* INRIA.*Rapport de Recherche 013.*1982.

Soloway, E. Bonar, J. Ehrlich, K.* Cognitive strategies and looping 
constructs: an empirical study.* Yale U.  Comp.Sci.Dept.*Res.Rpt.  
242.*1982.

Soloway, E. Ehrlich, K. Bonar, J. Greenspan, J.*What do novices know 
about programming?* Yale U. Comp.Sci.Dept.* Res.Rpt. 218.*1982.

Stefik, M. Bell, A.G. Bobrow, D.G.* Rule oriented programming in 
LOOPS.* Xerox. Palo Alto Res. Center.*Memo KB-VLSI-82-22.*1982.  
(working paper).

Sterling, L. Bundy, A.* Meta level inference and program 
verification.* Edinburgh U. A.I. Dept.*Res. Paper 168.* 1982.

Sterling, L. Bundy, A. Byrd, L. O'Keefe, R. Silver, B.* Solving 
symbolic equations with PRESS.* Edinburgh U.  A.I. Dept.*Res. Paper 
171.*1982.

Tappel, S. Westfold, S. Barr, A.* Programming languages for AI 
research (Chapter VI of Volume II of the Handbook of Artificial 
Intelligence, edited by Avron Barr and Edward A. Feigenbaum).* 
Stanford U. Comp.Sci.Dept.* STAN-CS-82-935; Stanford U.  
Comp.Sci.Dept. Heuristic Programming Project.*HPP-82-017.*1982.  90p.

Theriault, D.*A primer for the Act-1 language.* M.I.T.  A.I.  
Lab.*Memo 672.*1982.

Thompson, H.*Handling metarules in a parser for GPSG.  Edinburgh U.  
A.I. Dept.*Res. Paper 175.*1982.

Walker, A.* PROLOG/EX1: an inference engine which explains both yes 
and no answers.* IBM Watson Res. Center.*RJ 3771.*1983.

Waters, R.C.* LetS: an expressional loop notation.* M.I.T.  A.I.  
Lab.*Memo 680a.*1983.

Wilensky, R.*Talking to UNIX in English: an overview of an on-line 
consultant.* California U., Berkeley.  Comp.Sci.Div.*UCB/CSD 
82/104.*1982.

Wolper, P.L.*Synthesis of communicating processes from temporal logic 
specifications.* Stanford U.  Comp.Sci.Dept.*STAN-CS-82-925, Ph.D.  
Thesis. Wolper, P.L.* 1982.  111p.

Wood, R.J.* Franz flavors: an implementation of abstract data types in
an applicative language.* Maryland U.  Comp.Sci.  
Center.*TR-1174.*1982.

Woods, D.R.*Drawing planar graphs.* Stanford U.  
Comp.Sci.Dept.*STAN-CS-82-943, Ph.D. Thesis. Woods, D.R.* 1981.

------------------------------

Date: Tue 9 Aug 83 08:55:06-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Database Bibliography

Bancilhon, F. Richard, P.* TQL, a textual query language.* 
INRIA.*Rapport de Recherche 145.*1982.

Bossi, A. Ghezzi, C.*Using FP as a query language for relational 
data-bases.* Milan. Politecnico. Dipartimento di Elettronica. Lab. di 
Calcolatori.*Rapporto Interno N.  82-11.*1982.

Cooke, M.P.*A speech controlled information retrieval system.* U.K.  
National Physical Lab. Info. Technology and Computing Div.*DITC 
15/83.*1983.

Corson, Y.*Aspects psychologiques lies a l'interrogation d'une base de
donnees.* INRIA.*Rapport de Recherche 126.* 1982.

Cosmadakis, S.S.*The complexity of evaluating relational queries.* 
M.I.T. Lab. for Comp.Sci.*TM-229.*1982.

Daniels, D. Selinger, P. Haas, L. Lindsay, B. Mohan, C.  Walker, A.  
Wilms, P.*An introduction to distributed query compilation in R*.* IBM 
Watson Res. Center.*RJ 3497.*1982.

Gonnet, G.H.* Unstructured data bases.* Waterloo U.  
Comp.Sci.Dept.*Res.Rpt. CS-82-09.*1982.

Griswold, R.E.*The control of searching and backtracking in string 
pattern matching.* Arizona U. Comp.Sci.Dept.*TR 82-20.*1982.

Grosky, W.I.*Towards a data model for integrated pictorial databases.*
Wayne State U. Comp.Sci.Dept.*CSC-82-012.* 1982.

Haas, L.M. Selinger, P.G. Bertino, E. Daniels, D. Lindsay, B. Lohman, 
G. Masunaga, Y. Mohan, C. Ng, P. Wilms, P.  Yost, R.* R*: a research 
project on distributed relational DBMS.* IBM Watson Res. Center.*RJ 
3653.*1982.

Hailpern, B.T. Korth, H.F.*An experimental distributed database 
system.* IBM Watson Res. Center.*RC 9678.*1982.

Jenny, C.*Methodologies for placing files and processes in systems 
with decentralized intelligence.* IBM Watson Res. Center.*RZ 
1139.*1982.

Kanasaki, K. Yamaguchi, K. Kunii, T.L.*A software development system 
supported by a database of structures and operations.* Tokyo U.  
Info.Sci.Dept.*Tech.Rpt.  82-15.*1982.

Klug, A.*On conjunctive queries containing inequalities.* Wisconsin U.
Comp.Sci.Dept.*Tech.Rpt. 477.*1982.

Konikowska, B.* Information systems: on queries containing k-ary 
descriptors.* Polish Academy of Sciences. Inst. of Comp.Sci.*ICS PAS 
rpt. no. 466.*1982.

Lochovsky, F.H.* Alpha beta, edited by F.H. Lochovsky.* Toronto U.  
Comp. Systems Res. Group.*CSRG-143.*1982.  Contents: (1) Lochovsky, 
F.H. Tsichritzis, D.C.* Interactive query language for external data 
bases; (2) Mendelzon, A.O.*A database editor; (3) Lee, D.L.*A voice 
response system for an office information system; (4) Gibbs, S.J.* 
Office information models and the representation of 'office objects'; 
(5) Martin, P.* Tsichritzis, D.C.*A message management model; (6) 
Nierstrasz, O.*Tsichritzis, D.C.* Message flow modeling; (7) 
Tsichritzis, D.C. Christodoulakis, S. Faloutsos, C.* Design 
considerations for a message file server.

Lohman, G.M. Stoltzfus, J.C. Benson, A.N. Martin, M.D.  Cardenas, 
A.F.* Remotely sensed geophysical databases: experience and 
implications for generalized DBMS.* IBM Watson Res. Center.*RJ 
3794.*1983.

Madelaine, E.*Le systeme perluette et les preuves de representation de
types abstraits.* INRIA.*Rapport de Recherche 133.*1982.

Maier, D. Ullman, J.D.* Fragments of relations.* Stanford U.  
Comp.Sci.Dept.*STAN-CS-82-929.*1982.  11p.

Michard, A.*A new database query language for non-professional users:
design principles and ergonomic evaluation.* INRIA.*Rapport de 
Recherche 127.*1982.

Ng, P.* Distributed compilation and recompilation of database 
queries.* IBM Watson Res. Center.*RJ 3375.*1982.

Srivas, M.K.*Automatic synthesis of implementations for abstract data 
types from algebraic specifications.* M.I.T. Lab for Comp.Sci.*TR-276,
Ph.D. Thesis. Srivas, M.K. (This report is a minor revision of a
thesis of the same title submitted to the Department of Electrical
Engineering and Computer Science in December 1981).*1982.

Stabler, E.P. Jr.* Database and theorem prover designs for question 
answering systems.* Western Ontario U. Cognitive Science 
Centre.*COGMEM 12.*1982.

Stamos, J.W.*A large object oriented virtual memory: grouping 
strategies, measurements, and performance.* Xerox. Palo Alto Res.  
Center.*SCG-82-02.*1982.

Wald, J.A. Sorenson, P.G.*Resolving the query inference problem using 
Steiner trees.* Saskatchewan U.  Computational 
Sci.Dept.*Rpt.83-04.*1983.

Weyer, S.A.* Searching for information in a dynamic book.* Xerox.  
Palo Alto Res. Center.*SCG-82-01, Ph.D. Thesis.  Weyer, S.A.  
(Stanford University).*1982.

------------------------------

Date: Tue 9 Aug 83 08:56:36-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Computer Science Bibliography

[Includes selected topics in CS that seem relevant to AIList
and are not covered in the preceding bibliographies.]

Eppinger, J.L.*An empirical study of insertion and deletion in binary 
search trees.* Carnegie Mellon U.  Comp.Sci.Dept.*CMU-CS-82-146.*1982.

Gilmore, P.C.*Solvable cases of the travelling salesman problem.* 
British Columbia U. Comp.Sci.Dept.*Tech.Rpt.  81-08.*1981.

Graham, R.L. Hell, P.*On the history of the minimum spanning tree 
problem.* Simon Fraser U. Computing Sci.Dept.*TR 82-05.*1982.

Gupta, A. Hon, R.W.*Two papers on circuit extraction.* Carnegie Mellon
U. Comp.Sci.Dept.*CMU-CS-82-147.*1982.  Contents: Gupta, A.* ACE: a
circuit extractor; Gupta, A.  Hon, R.W.* HEXT: a hierarchical circuit
extractor.

Hofri, M.*BIN packing: an analysis of the next fit algorithm.* 
Technion - Israel Inst. of Tech.  Comp.Sci.Dept.*Tech.Rpt. 242.*1982.

Jomier, G.*An overview of systems modelling and evaluation 
tendencies.* INRIA.*Rapport de Recherche 134.*1982.

Jurkiewicz, E.* Stability of compromise solution in multicriteria 
decision making problem.* Polish Academy of Sciences. Inst. of 
Comp.Sci.*ICS PAS rpt. no. 455.*1981.

Kirkpatrick, D.G. Hell, P.*On the complexity of general graph factor 
problems.* British Columbia U.  Comp.Sci.Dept.*Tech.Rpt. 81-07.*1981.

Kjelldahl, L. Romberger, S.*Requirements for interactive editing of 
diagrams.* Royal Inst. of Tech., Stockholm.  Num.Anal. & Computing 
Sci.Dept.*TRITA-NA-8303.*1983.

Moran, S.*On the densest packing of circles in convex figures.* 
Technion - Israel Inst. of Tech. Comp.Sci.Dept.* Tech.Rpt. 241.*1982.

Nau, D. Kumar, V. Kanal, L.*General branch and bound and its relation 
to A* and AO*.* Maryland U. Comp.Sci.  Center.*TR-1170.*1982.

Nau, D.S.* Pathology on game trees revisited, and an alternative to 
minimaxing.* Maryland U. Comp.Sci.  Center.*TR-1187.*1982.

Roberts, B.J. Marashian, I.* Bibliography of Stanford computer science
reports, 1963-1982.* Stanford U.  Comp.Sci.Dept.*STAN-CS-82-911.*1982.
59p.

Scowen, R.S.*An introduction and handbook for the standard syntactic 
metalanguage.* U.K. National Physical Lab.  Info. Technology and 
Computing Div.*DITC 19/83.*1983.

Seidel, R.*A convex hull algorithm for point sets in even dimensions.*
British Columbia U. Comp.Sci.Dept.*Tech.Rpt.  81-14.*1981.

Varah, J.M.*Pitfalls in the numerical solution of linear ill posed 
problems.* British Columbia U. Comp.Sci.Dept.* Tech.Rpt. 81-10.*1981.

Wegman, M.*Summarizing graphs by regular expressions.* IBM Watson Res.
Center.*RC 9364.*1982.

------------------------------

End of AIList Digest
********************

∂16-Aug-83  1113	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #39
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Aug 83  11:06:44 PDT
Date: Friday, August 12, 1983 9:06AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #39
To: AIList@SRI-AI


AIList Digest            Friday, 12 Aug 1983       Volume 1 : Issue 39

Today's Topics:
  Textnet - Publish Adventure,
  Representation - Current Adequacy,
  Computational Complexity - NP-Completeness & FFP Machine,
  Programming Languages - Functional Programming,
  Fifth Generation - Opinion & Pearl Harbor Correction,
  Programming Languages & Humor - Comment
----------------------------------------------------------------------

Date: 11-Aug-83 13:52 PDT
From: Kirk Kelley  <KIRK.TYM@OFFICE-2>
Subject: Re: Textnet

I have spent most spare minutes for the last ten years designing a
distributed hyper-service using NLS and Augment as a development tool.
We can simulate, via electronic mail, the beginnings of a
self-descriptive service-service called the "Publish adventure".  The
Xanadu project's Hypertext, because of its devotion to static text, is
a degenerate case of the Publish adventure.  If you are interested in
collaborating on the design of the protocol, let me know.

 -- Kirk Kelley

------------------------------

Date: 10 Aug 83 16:36:29-PDT (Wed)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: A Real AI Topic
Article-I.D.: ssc-vax.398

First let me get in one last (?) remark about where the Japanese are
in AI - pattern recognition and robotics are useful but marginal in
the AI world.  Some of the pattern recognition work seems to be
reaching the same conclusions now that real AI workers reached ten
years ago (those who don't know history are doomed to repeat it!).

Now on to the good stuff.  I have been thinking about knowledge 
representation (KR) recently and made some interesting (to me, anyway)
observations.

1.  Certain KRs tend to show up again and again, though perhaps in
    well-disguised forms.

2.  All the existing KRs can be cast into something like an
    attribute-value representation.

Space does not permit going into all the details, but as an example,
the PHRAN language analyzer from Berkeley is actually a specialized
production rule system, although its origins were elsewhere (in
parsers using demons).  Semantic nets are considered obsolete and ad
hoc, but predicate logic reps end up looking an awful lot like a net
(so does a sizeable frame system).  A production rule has two
attributes: the condition and the action.  Object-oriented programming
(smalltalk and flavors) uses the concept of attributes (instance
variables) attached to objects.  There are other examples.
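
As a toy illustration of the observation (my own Python sketch, not
anything from PHRAN, RLL, or the other systems named above; all names
below are hypothetical), each of these KRs really can be phrased as
attribute-value pairs attached to symbols:

```python
# Three superficially different KRs, each reduced to attribute-value
# pairs attached to a symbol (plain dicts; a hypothetical toy).

# A production rule: two attributes, condition and action.
rule = {"condition": lambda facts: "wet" in facts,
        "action":    lambda facts: facts | {"slippery"}}

# A semantic-net node: links are attributes whose values are symbols.
bird = {"isa": "animal", "can": "fly"}

# An object (Smalltalk/flavors style): instance variables as attributes.
point = {"x": 3, "y": 4}

facts = {"wet"}
if rule["condition"](facts):        # the rule fires and asserts a fact
    facts = rule["action"](facts)
print(sorted(facts))                # ['slippery', 'wet']
print(bird["isa"], point["x"])      # animal 3
```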

Question: is there something fundamentally important and inescapable 
about attribute-value pairs attached to symbols?  (ordinary program 
code is a representation of knowledge, but doesn't look like av-pairs
- is it a valid counterexample?)

What other possible KRs are there?

Certain KRs (such as RLL (which is really a very interesting system)) 
claim to be universal and capable of representing anything.  Are there
any particularly difficult concepts that *no* KR has been able to
represent (even in a crude way)?  What is so difficult about those
concepts, if any such exist?

                                Just stirring up the mud,
                                stan the leprechaun hacker
                                ssc-vax!sts (soon utah-cs)


[I believe that planning systems still have difficulties in
representing continuous time, hypothetical worlds, beliefs, and
intentions, among other things.  In vision, robotics, geology, and
medicine, there are difficulties in representing shape, texture, and
spatial relationships.  Attribute-value pairs are just not very
useful for representing continuous quantities.  -- KIL]

------------------------------

Date: Mon 8 Aug 83 17:19:42-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: NP-completeness

    I forward this message because it raises an interesting point, and
I thought readers may care to see it. I had a reply to this, but
perhaps someone else may care to comment.

  Date:     Sun,  7 Aug 83 18:28:09 CDT
  From: Mike.Caplinger <mike.rice@Rand-Relay>

  Claiming that a parallel machine makes NP-complete problems
  polynomial (given that the machine has an infinite number of
  processing elements) is certainly true (by the definition of
  NP-completeness), but meaningless.  Admittedly, a large number of
  processing elements might make a finitely-bounded algorithm faster,
  but any finitely-bounded algorithm is a constant time algorithm.
  (If I say N is never greater than the number of processors, then N
  might as well be a constant.)
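
The quoted point can be made concrete with a toy subset-sum sketch:
give each of the 2**n subsets its own notional processor and the
parallel depth is polynomial in n, but only because the processor
count is exponential.  (This simulation is mine and runs sequentially;
`subset_sum_parallel` is a hypothetical name.)

```python
# Subset-sum "solved" by giving each of the 2**n subsets its own
# (simulated) processor.  Each processor does only O(n) work, so the
# parallel depth is polynomial -- the exponential cost has simply been
# moved into the processor count.
from itertools import combinations

def subset_sum_parallel(nums, target):
    n = len(nums)
    # one "processor" per subset, simulated here by sequential iteration
    for r in range(n + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:   # each check is O(n) work
                return subset
    return None

print(subset_sum_parallel([3, 34, 4, 12, 5, 2], 9))   # (4, 5)
```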

------------------------------

Date: 10 Aug 83 13:19:32-PDT (Wed)
From: ihnp4!we13!burl!duke!unc!koala @ Ucb-Vax
Subject: Matrix Multiplication on the FFP Machine
Article-I.D.: unc.5687

        Since the subject has been brought up, I felt I should clear
up some of the statements about the FFP machine.  The machine consists
of a linear vector of small processors which communicate by being
connected as the leaves of a binary tree.

        Roughly speaking, the FFP machine performs general matrix
multiplication in O(nxn) space and time.  Systolic arrays can multiply
matrices in O(n) time, but do not provide flexibility in the sizes of
matrices that can be handled.

        Order notation only presents half the picture - in real life,
constant factors and other terms are also important.  The machine's
matrix multiply operation examines each element of the two matrices
once.  Multiplying two matrices, mxn and nxp, requires accessing (mxn
+ nxp) values, and this is the measure of the time for the
computation.  Each cell performs n multiplications, dominated by the
access.  Further, when you multiply two matrices, mxn and nxp, the
result is of size mxp.  (Consider multiplying a column by a row).
Thus, when n < (mxp)/(m+p), extra space must be allocated for the
result.  This is also a quadratic time operation.
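
The size argument can be checked with a plain sequential sketch
(ordinary Python, not the FFP machine itself; `matmul` is a
hypothetical helper):

```python
# Multiply an m x n matrix by an n x p matrix and check the sizes the
# article discusses.
def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

# A column (3x1) times a row (1x4): the inputs hold 3 + 4 = 7 values,
# but the result holds 3 * 4 = 12 -- extra space must be allocated
# whenever n < (m*p)/(m+p), as noted above.
col = [[1], [2], [3]]
row = [[10, 20, 30, 40]]
C = matmul(col, row)
print(len(C), len(C[0]))    # 3 4
```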

                                David Middleton
                                UNC Chapel Hill
                                decvax!duke!unc!koala

------------------------------

Date: 11 Aug 83 16:23:19-PDT (Thu)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: Matrix Multiplication on the FFP Machine
Article-I.D.: ssc-vax.406

I must admit to being a little sloppy when giving the maximum speed of
a matrix multiplication on an FFP machine (haven't worked on this 
stuff for a year, and my memory is slipping).  I still stand by the 
original statement, however.  The *maximum* possible speed for the 
multiplication of two nxn matrices is O(log n).  What I should have 
done is state that the machine architecture is completely unspecified.
I am not convinced that the Mago tree machine is the ultimate in FFP
designs, although it is very interesting.  The achievement of O(log n)
requires several things.  Let me enumerate.  First, assume that the
matrix elements are already distributed to their processors.  Second,
assume that a single processor can quickly distribute a value to 
arbitrarily many processors (easy: put it on the bus (buss? :-} ) and
let the processors all go through a read cycle simultaneously).  
Third, assume that the processors can communicate in such a way that
addition of n numbers can be performed in log n time (by adding pairs,
then pairs of pairs, etc).  Then the distribution of values takes
constant time, the multiplications are all done simultaneously and so
take constant time, leaving only the summation to slow things down.  I
know this is fast and loose; its main failing is that it assumes the
availability of an extraordinarily high number of communication paths
(the exact number is left as an exercise for the reader).
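
The "add pairs, then pairs of pairs" step can be simulated
sequentially; assuming one processor per pair, each loop iteration
below stands for one constant-time parallel round (`tree_sum` is a
hypothetical name, not part of any FFP design):

```python
# Summing n numbers in ceil(log2 n) parallel rounds.  Each round
# halves the list; with one processor per pair, each round costs
# constant time, so only the number of rounds matters.
import math

def tree_sum(values):
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        # all pairs are added "simultaneously" in one round
        vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        rounds += 1
    return vals[0], rounds

total, rounds = tree_sum(range(16))
print(total, rounds)        # 120 4 -- four rounds, not fifteen additions
assert rounds == math.ceil(math.log2(16))
```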

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

ps For those not familiar with FP, read J. Backus' Turing Lecture in
CACM (Aug 78, I believe) - it is very readable; he also gives a
one-liner for matrix multiplication in FP, which I used as a basis for
the timing hackery above.

------------------------------

Date: 11 Aug 83 19:32:18-PDT (Thu)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Functional Programming and AI
Article-I.D.: ssc-vax.408

It is interesting that the subject of FP (an old interest of mine) has
arisen in the AI newsgroup (no this is not an "appropriate newsgroup"
flame).  Having worked with both AI and FP languages, it seems to me
that the two are diametrically opposed to one another.  The ultimate
goal of functional programming language research is to produce a
language that is as clean and free of side effects as possible; one
whose semantic definition fits on a single side of an 8 1/2 x 11 sheet
of paper (and not in microform, smart-aleck!).  On the other hand, the
goal of AI research (at least in the AI language area) is to produce
languages that can effectively work with as tangled and complicated 
representations of knowledge as possible.  Languages for semantic 
nets, frames, production systems, etc, all have this character.  
Formal definitions are at best difficult, and sometimes impossible 
(aside: could this be proved for any specific knowledge rep?).  Now
between the Japanese 5th generation project (and the US response) and
the various projects to build non-von Neumann machines using FP, it
looks to me like the seeds of a controversy over the best way to do
programming.  Should we be using FP languages or AI languages?  We
can't have it both ways, right?  Or can we?

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: Mon 8 Aug 83 13:58:36-PDT
From: Robert Amsler <AMSLER@SRI-AI.ARPA>
Subject: Japanese 5th Generation Effort

It seems to me that the 5th generation effort differs from most
efforts we are familiar with in being strictly top-down. That is to
say, the Japanese are willing to start work not only without knowing
how to solve the nitty-gritty problems at the bottom--but without
knowing what those nitty-gritty problems actually are. Although
dangerous, this is a very powerful research strategy. Until it gets
bogged down due to an almost insurmountable number of unsolvable 
technical problems, one can expect very rapid progress indeed. When it
does get bogged down, their understanding of the problems will be as
great as that of anyone else in the world. The best way to learn is by
doing.

------------------------------

Date: 9-AUG-1983 15:24
From: SHRAGER%CMU-PSY-A@CMU-CS-PT
Subject: On Science and the Fifth Generation


I'm a little confused about why this Japanese business seems to be
scaring the pants off of the US research community; why scientists are
quoted in national news magazines as being "panic stricken", and why
terms like "race" and "ahead" are being thrown around in a community
of "scientists"; why anyone cares if the fifth generation thing is
propaganda or not.  You'll find out when they make it work or they
don't!

Science is a cooperative effort.  If Japan wants to jump forward
(note, not "ahead" in any sense) in technology and understanding it is
the position of every other scientist to applaud their boldness and
provide every ounce of critical advice we can give them.  So what if
Symbolics goes bankrupt because Japan makes a machine that makes the
3600 look like an Apple!? It will probably cost one third as much and
I'll be able to have one on my desk to further my research efforts.
Likewise, whatever the Japanese research community learns will
certainly benefit my research, even if just by learning what roads are
not fruitful.

Worry about the arms race, not the computer race!  Work as hard as you
can to further science and technology, not to beat the Japanese!  Work
toward the Nth generation, not the fifth or the sixth or the
seventh....  A little competition is probably useful sometimes, but
not to the detriment of the community spirit of science.  If we start
hiding things from one another, do we have the right to call ourselves
scientists?

When I begin to worry is when Japan decides to build a better MX
missile, not a better computer system.  Then issues of scientific
morals are involved and it's a whole 'nother ballgame.

------------------------------

Date: 9 Aug 83 21:04:30-PDT (Tue)
From: decvax!microsoft!uw-beaver!ssc-vax!tjj @ Ucb-Vax
Subject: Re: Pearl Harbor Day
Article-I.D.: ssc-vax.393

OK folks, especially those of you from various parts of tektronix-land
who don't seem to have access to or have interest in reading a history
book, let's review the bidding for your edification at least.  A very
unsavory reference was made in the context of a remark from a
present-day visiting professor from Japan regarding the Japanese Fifth
Generation Project.  The first bid for a date was 5 Dec 1948.  This
was changed by the same author after he received at least one
electronic mail reply to 5 Dec 1945!  This may have been
tongue-in-cheek, as I know that he was given the correct date at least
once prior to his second message.  It's a matter of record that the
Japanese Ambassador was instructed to visit the Secretary of State on
Friday, December 5, 1941.  Whether he or his representative was again
doing so on Sunday, December 7, 1941 is a moot point, as I am certain
that they were very busy at the old trash incinerator that morning.  
Although we should not forget history, lest we be doomed to repeat it,
I do think that comparison of this episode with the present day 5th
Generation Project, even in the context of the devastation of Detroit,
is stretching things beyond the breaking point.  If you want to flame,
send mail to me, as I already have my asbestos suit on, but let's
graduate net.ai back to something more appropriate and certainly more
interesting.

TJ (with Amazing Grace) The Piper ssc-vax!tjj

------------------------------

Date: 10 Aug 83 12:02:09-PDT (Wed)
From: teklabs!done @ Ucb-Vax
Subject: Re: 5th generation computers
Article-I.D.: teklabs.2322

<flame on>

I can't stand this any longer:

   "YESTERDAY, DECEMBER 7, 1941; A DATE WHICH WILL LIVE IN INFAMY!"

Carefully memorize this date and PLEASE DON'T SCREW IT UP AGAIN.  Or
maybe infamy needs to be expressed in binary for you Computer Science 
folks.

<flame off>

Don Ellis   | USENET:  {aat,cbosg,decvax,harpo,ihnss,orstcs,pur-ee,ssc-vax
Tektronix   |          ucbvax,unc,zehntel,ogcvax,reed} !teklabs!done
Oregon, USA | ARPAnet: done.tek@rand-relay    CSNet: done@tek

------------------------------

Date: 10 Aug 1983 1244-EDT
From: MONTALVO%MIT-OZ@MIT-ML
Subject: Re: HFELISP

   Date: 27 Jul 1983 0942-PDT
   From: Jay <JAY@USC-ECLC>
   Subject: HFELISP

           HFELISP (Heffer Lisp) HUMAN FACTOR ENGINEERED LISP

                                   ABSTRACT

     HFE suggests that the more complicated features of (common) Lisp
   are dangerous, and hard to understand.  As a result a number of
   Fortran, Cobol, and 370 assembler programmers got together with a
   housewife. ...

How dare you malign the good sense of housewives by classing them with
Fortran, Cobol, and 370 assembler programmers!

Fanya Montalvo

------------------------------

End of AIList Digest
********************

∂16-Aug-83  1333	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #40
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Aug 83  13:26:49 PDT
Date: Tuesday, August 16, 1983 9:10AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #40
To: AIList@SRI-AI


AIList Digest            Tuesday, 16 Aug 1983      Volume 1 : Issue 40

Today's Topics:
  Knowledge Representation & Applicative Languages,
  Fifth Generation - Military Potential,
  Artificial Intelligence - Bigotry & Turing Test
----------------------------------------------------------------------

Date: Friday, 12 Aug 1983 15:28-PDT
From: narain@rand-unix
Subject: Reply to stan the leprechaun hacker


I am responding to two of the points you raised.

Attribute value pairs are hopeless for any area (including AI areas)
where your "cognitive chunks" are complex structures (like trees). An
example is symbolic algebraic manipulation, where it is natural to
think in terms of general forms of algebraic expressions. Try writing
a symbolic differentiation program in terms of attribute-value pairs.
Another example is the "logic grammars" for natural language, whose
implementation in Prolog is extremely clear and efficient.
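
To see why trees rather than attribute-value pairs are the natural
unit here, a minimal differentiator over nested tuples (my own toy
sketch, not a real algebra system) might look like:

```python
# A minimal symbolic differentiator.  Expressions are nested tuples
# like ("+", ("*", "x", "x"), 3); the unit of manipulation is a whole
# tree, not an attribute-value pair.
def d(expr, x):
    if expr == x:
        return 1
    if not isinstance(expr, tuple):      # a constant or other variable
        return 0
    op, a, b = expr                      # only binary ops in this toy
    if op == "+":
        return ("+", d(a, x), d(b, x))
    if op == "*":                        # product rule
        return ("+", ("*", d(a, x), b), ("*", a, d(b, x)))
    raise ValueError(op)

# d/dx (x * x + 3), unsimplified
print(d(("+", ("*", "x", "x"), 3), "x"))
# ('+', ('+', ('*', 1, 'x'), ('*', 'x', 1)), 0)
```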

As to whether FP or more generally applicative languages are useful to
AI depends upon the point of view you take of AI. A useful view is to
consider it as "advanced programming" where you wish to develop 
intelligent computer programs, and so develop powerful computational
methods for them, even if humans do not use those methods. From this
point of view, Backus's comments about the "von Neumann bottleneck"
apply to AI programming just as they do to conventional programming.
Hence applicative languages may have ideas that could
solve the "software crisis" in AI as well.

This is not just surmise; the Prolog applications to date and underway
are evidence in favor of the power of applicative languages. You may
debate about the "applicativeness" of practical Prolog programming,
but in my opinion the best (and also the most efficient) Prolog
programs are in essence "applicative".

-- Sanjai Narain

------------------------------

Date: 12 Aug 1983 1208-PDT
From: FC01@USC-ECL
Subject: Knowledge Representation, Fifth Generation

About knowledge representation---

        Although many are new to this ballgame, the fundamentals of
the field are well established. Look in the dictionary of information
science a few years back (5-10?) for an article on the representation
of knowledge by Irwin Marin.  The (M,R) pair mentioned is indeed a
general structure for representation. In fact, you may recall 10 or 20
years ago there was talk that the most efficient programs on computers
would eventually consist of many many pointers (Rs) that pointed
between datums (Ms) in many different ways - kinda like the brain!!! It
has gone well beyond the (M,R) pair stage and Marin has developed a
structure for representation that allows top down knowledge
engineering to proceed in a systematic fashion. I guess many of us
forsake history in many ways, both social and technical.

        As to the 'race' to 5th generation computers, it may indeed be
a means to further the military industrial complex in the area of
computing, but let us also consider the tactical implications of a
highly intelligent (take the term with a grain of salt when speaking
of a computer) tactical computer. Perhaps the complexities of battle
could be simplified for human consumption to the point where a good
general could indeed win an otherwise lost war. Perhaps not. The 
scientific sharing of ideas has always been the boon of science and
the bust of government. The U.S. is at an advantageous vantage point
from the boon point of view because we share so much with each other
and others. We are also tops in the bust category because it is so
easy to get our information to other places.  Somewhere the scientific
need for communication must be traded off with the possible effects of
the research. This is what I call scientific responsibility.  As
scientists we are responsible not only for our research and the
dissemination of our knowledge, but also for the effects
of that knowledge. If we shared the 'secrets' of the atomic bomb with
the world as we developed it, do you think more or fewer people would
have died? I think the Germans (who were also working on the project)
might have been able to complete their version sooner and would have
killed a great number more people. In the case of Japan, we are
talking economic struggle rather than political, but the concept of
war and destruction can be visualized just as well. A small country
using a very rapid economic growth to push ahead of the rest of the
world, now has no place to expand to. Heard it before? What new
technology will be developed using the new generation of computers?
Can we afford to lose our edge in yet another technological area to
the more eager of the world? Is this just another ploy of the M.I.
complex to get money from the people and take food from the hungry?
Tough questions, without the facts hard to answer.

                                        Another controversy ignited or
                                        enflamed by yours truly,
                                                Fred

------------------------------

Date: 12 Aug 1983 15:09-PDT
From: andy at -[VAX]
Subject: Japan's supercomputers as potential defense threat


    I'm a little confused about why this Japanese business seems to be
    scaring the pants off of the US research community... why
    anyone cares whether the fifth generation thing is propaganda or not.
    You'll find out when they make it work or they don't!  ...Worry
    about the arms race, not the computer race!
                        -- SHRAGER%CMU-PSY-A@CMU-CS-PT

One serious reason for concern, at least according to political 
conservatives, is that the United States would cease to be in a 
position to control the distribution of the world's most advanced 
computing technology.

Currently, there are specific export restrictions to prohibit transfer
of advanced technology from the U.S. to its putative enemies (e.g. the
Soviet Union).  (For example, I was told not long ago that it is 
illegal to fly over France carrying the schematics for a Cyber in your
briefcase.)

The reason for this becomes quite clear when you consider who the 
principal consumers of supercomputers are in this country:
disproportionately, they are people pursuing nuclear energy
and weapons R&D, cryptology, and war gaming.  If the Japanese have the
fastest computers, then they control distribution of the hottest 
computational technology and at least potentially could sell it to 
countries that DoD would prefer to remain well behind us
technologically.  Worse, they might sell it to others but not to the
United States.

While there are lapses in the effectiveness of this sort of export 
control, it seems to work fairly well overall.  For example, I
recently read that the East Germans have just successfully fabricated
a Z-80 chip clone; reportedly, although their chip does seem to work,
it is substantially inferior to the state of the art here.  If the
best that "blacklisted" countries can do is play catch-up via reverse 
engineering, the U.S. Government will have met its practical goal of 
denying them up-to-date technology.  If, on the other hand, other 
countries are able to produce faster and more powerful computers, the 
U.S. could no longer control access to the best tools available for 
defense R&D.


    I begin to worry when Japan decides to build a better MX
    missile, not a better computer system.  Then issues of scientific
    morals are involved and it's a whole 'nother ballgame.


Supercomputers play a significant role in intelligence and weapons 
research in the United States.  I would expect those people who
subscribe to the view that the U.S. Government should deny high 
technology to its perceived enemies to argue that they ARE "worry[ing]
about the arms race" when they feel threatened by Japan's big 
technology push, and that the issue IS at least qualitatively 
equivalent to Japan's developing better missiles.

                                                asc

p.s. No flames about science and brotherhood, please.  I didn't claim
     to agree with the conservatives whose views I'm attempting to
     describe.  The argument that "Science is a cooperative effort"
     has, BTW, also been voiced frequently in response to NSA's
     recent attempt to control cryptology research in the U.S.

p.p.s.  Perhaps further discussion of the role of Japan's
     supercomputer project in defense applications should be directed to,
     or at least CC'd to, ARMS-D@MIT-MC.

------------------------------

Date: Fri, 12 Aug 83 12:59:34 EDT
From: Brint Cooper (CTAB) <abc@brl-bmd>
Subject: Unprintable

I'm sorry, folks, but all this flaming about 7 December 1941 sounds
too much like old fashioned racism for me.

B. Cooper

------------------------------

Date: 12 Aug 83 16:52:14-PDT (Fri)
From: ihnp4!we13!otuxa!ll1!sb1!sb6!emory!gatech!spaf @ Ucb-Vax
Subject: Sex, religion, words, smoking, farting, and the net
Article-I.D.: gatech.364

It just occurred to me today that most of the discussions going on
about use of genderless pronouns, homosexuals, heterosexuals,
personal habits, religion, and other interesting habits, all have one
point in common when we discuss them -- they're *human*
activities/conditions.

Now stop for a moment and consider the Turing test.  When you read
these messages from other users on the net, how do you know that they
are from people typing at some site rather than some intelligent
program?  I would contend that a good definition of humanity and
intelligence could be formulated by someone looking at the net
traffic.  The rabid flamers and fanatics who condemn and insult would
not meet that definition.

We develop new ideas daily in this field.  Handicapped people are
freed from their limitations if they can communicate with the
rest of us at 300 or 1200 baud.  They can stutter or be mute, they
can be almost completely paralyzed, but their minds and souls are
still alive and free and can communicate with the rest of us.

It doesn't matter if you are male or female, black, red, white,
green, tall, short, old, young, fat, smoking, farting, going 55 mph,
attracted to members of the same sex, attracted to sheep, or any
possible variation of the human condition -- you are a human
intelligence at the other end of my network connection, and I deal
with you in a human manner.  Once you show your lack of tolerance or
your inability to at least try to understand, you show yourself to be
less than human.

Discrimination really means the ability to differentiate amongst
alternatives.  Prejudice and bigotry mean that you discriminate based
on factors which have no real bearing on the choice at hand.  I
believe that "human intelligence" implies the ability to
discriminate and the inability to be a bigot.

I hope that some of the contributors to the net are simply AI
projects; I would hate to believe that there are people with so much
hate and intolerance as is sometimes expressed.

Comments?

--
The soapbox of Gene Spafford
CSNet:  Spaf @ GATech
ARPA:   Spaf.GATech @ UDel-Relay
uucp:   ...!{sb1,allegra,ut-ngp}!gatech!spaf
        ...!duke!mcnc!msdc!gatech!spaf


[I disagree strongly with any definition of humanity that excludes
flamers and bigots, but this digest is not the place for such a
discussion.  The question of whether intelligence excludes (or
implies) prejudice is more interesting.  We should also be seeking a
replacement for the Turing test that could identify nonhuman
intelligence. -- KIL]

------------------------------

Date: 14 Aug 83 1:12:15-PDT (Sun)
From: harpo!seismo!rlgvax!oz @ Ucb-Vax
Subject: Re: Sex, religion, words, smoking, farting, and the net
Article-I.D.: rlgvax.994

I agree that it would be a shame if there were AI projects that had
such hate and bigotry.  I argue that it WOULD be possible for an AI
project to exhibit the narrowmindedness and stupidity that we
frequently see on the net.  An interesting discussion, Gene, it is
something to ponder.

                                OZ
                                seismo!rlgvax!oz

------------------------------

End of AIList Digest
********************

∂17-Aug-83  1713	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #41
Received: from SRI-AI by SU-AI with TCP/SMTP; 17 Aug 83  17:12:52 PDT
Date: Wednesday, August 17, 1983 4:04PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #41
To: AIList@SRI-AI


AIList Digest           Thursday, 18 Aug 1983      Volume 1 : Issue 41

Today's Topics:
  Expert Systems - Rog-O-Matic,
  Programming Languages - LOGLisp & NETL & Functional Programming,
  Computational Complexity - Time/Space Tradeoff & Humor
----------------------------------------------------------------------

Date: Tuesday, 16 August 1983 21:20:38 EDT
From: Michael.Mauldin@CMU-CS-CAD
Subject: Rog-O-Matic paper request


People of NetLand, the long awaited day of liberation has arrived!
Throw off the shackles of ignorance and see what secrets of
technology have been laid bare through the agency of a free
university press, unbridled by the harsh realities of economic
competition!

                The Rog-O-Paper is here!

For a copy of CMU Technical Report CMU-CS-83-144, entitled
"Rog-O-Matic: A Belligerent Expert System", please send your physical
address to

                Mauldin@CMU-CS-A

and include the phrase "paper request" in the subject line.


For those who have a copy of the draft, the final version contains
two more figures, expanded descriptions of some algorithms, and an
updated discussion of Rog-O-Matic's performance, including
improvements made since February.  And even if you don't have a copy
of the draft, the final version still contains two more diagrams,
expanded descriptions of some algorithms, and an updated discussion
of performance.  The history of the program's development is also
chronicled.

The source is still available by FTP, or it can be mailed in
several pieces.  It is about a third of a megabyte of characters,
and is mailed in pieces either 70K or 40K characters long.

Michael Mauldin (Fuzzy)
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA  15213


                     CMU-CS-83-144      Abstract

      Rog-O-Matic is an unusual combination of algorithmic and
      production systems programming techniques which cooperate
      to explore a hostile environment.  This environment is the
      computer game Rogue, which offers several advantages for
      studying exploration tasks.  This paper presents the major
      features of the Rog-O-Matic system and the types of
      knowledge sources and rules used to control the
      exploration, and compares the performance of the system
      with human Rogue players.

------------------------------

Date: Tue 16 Aug 83 22:56:27-CDT
From: Donald Blais <CC.BLAIS@UTEXAS-20.ARPA>
Subject: LOGLisp language query

In the July 1983 issue of DATAMATION, Larry R. Harris states that the
logic programming language LOGLisp has recently been developed by
Robinson.  What sources can I go to for additional information on this
language?

-- Donald

------------------------------

Date: Wed, 17 Aug 83 04:25 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: Scott Fahlman's NETL

I've read a book by Scott Fahlman about a system called NETL for
representing knowledge in terms of a particular tree-like structure.  
I found it a fascinating idea.  It was published in 1979.  When I last
heard about it, there were plans to develop some hardware to implement
the concept.  Does anyone know what's been happening on this front?
                              Alan Glasser (glasser@lll-mfe)

------------------------------

Date: 15 Aug 83 22:44:27-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: FP and AI - (nf)
Article-I.D.: uiucdcs.2574

Having also worked with both FP and AI systems I basically agree with
your perceptions of their respective goals and functions, but I think
that we can have both, since they operate at different levels: Think
of a powerful, functional language that underlies the majority of the
work in AI data and procedural representations, and imagine what the
world would be like if it were pure (but still powerful).

Besides the "garbage collector" running now and then, there could,
given the mathematical foundations of FP systems, also be an
"efficiency expert" hanging around to tighten up your sloppy code.
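
One classic rewrite such an "efficiency expert" could perform is map
fusion: because pure code has no side effects, map f (map g xs) can
always be collapsed into a single pass over the data.  A toy sketch in
Python (the tuple representation and names are invented for
illustration, not taken from any actual FP system):

```python
# Toy "efficiency expert": fuse  map f (map g xs)  into  map (f . g) xs.
# Legal only because the expressions are pure (no side effects).
# Expressions are nested tuples of the invented form ("map", fn, arg).

def compose(f, g):
    return lambda x: f(g(x))

def tighten(expr):
    # Rewrite bottom-up, fusing nested maps into one.
    if isinstance(expr, tuple) and expr[0] == "map":
        _, f, arg = expr
        arg = tighten(arg)
        if isinstance(arg, tuple) and arg[0] == "map":
            _, g, inner = arg
            return ("map", compose(f, g), inner)   # map fusion
        return ("map", f, arg)
    return expr

def evaluate(expr):
    if isinstance(expr, tuple) and expr[0] == "map":
        _, f, arg = expr
        return [f(x) for x in evaluate(arg)]
    return expr

prog = ("map", lambda x: x + 1, ("map", lambda x: x * 2, [1, 2, 3]))
fused = tighten(prog)
print(evaluate(prog))    # [3, 5, 7]
print(evaluate(fused))   # same answer, one traversal instead of two
```

The rewrite preserves the answer while halving the number of list
traversals, which is exactly the kind of tightening that would be
unsafe in an impure language.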

Jordan Pollack
University of Illinois
...!pur-ee!uiucdcs!uicsl!pollack

P.S. There is a recent paper by Lenat from Rand called "Cognitive
Economy" which discusses some possible advances in computing
environment maintenance; I don't recall it being linked to FP
systems, however.

------------------------------

Date: 16 Aug 83 20:33:29 EDT  (Tue)
From: Mark Weiser <mark%umcp-cs@UDel-Relay>
Subject: maximum speed

This *maximum* time business needs further ground rules if we are to
discuss it here (which we probably shouldn't).  For instance, the
argument that communication and multiplication paths don't matter in an
nxn matrix multiply, but that the limiting step is the summation of n
numbers, seems to allow too much power in specifying components.  I am
allowed unboundedly many processors and communication paths, but only
a tree of adders?  I can build you a circuit that will add n numbers
simultaneously, so that means the *maximum* speed of an nxn matrix
multiply is constant.  But it just ain't so.  As n grows larger and
larger and larger the communication paths and the addition circuitry 
will also either grow and grow and grow, or the algorithm will slow
down.  Good old time-space tradeoff.

        (Another time-space tradeoff for matrix multiply on digital
computers:  just remember all the answers and look them up in ROM.
Result: constant time matrix multiply for bounded n.)
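
The tree-of-adders point can be made concrete: pairing the numbers off
in parallel rounds, summing n numbers takes about log2(n) adder
delays, not constant time.  A small illustrative sketch (each loop
iteration below stands for one parallel round of two-input adders):

```python
# Sketch of Weiser's point: a balanced tree of two-input adders sums
# n numbers in ceil(log2 n) sequential adder delays, not constant time.
# Each while-loop pass adds disjoint pairs "in parallel"; the number of
# passes is the circuit depth.

def tree_sum(xs):
    depth = 0
    while len(xs) > 1:
        # one parallel round: add adjacent pairs, carry any odd element
        xs = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)] \
             + ([xs[-1]] if len(xs) % 2 else [])
        depth += 1
    return xs[0], depth

total, depth = tree_sum(list(range(16)))
print(total, depth)   # 120 after 4 rounds: log2(16) = 4
```

As n grows, the depth grows with log n, which is the time-space
tradeoff the message describes.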

------------------------------

Date: 16 Aug 1983 2016-MDT
From: William Galway <Galway@UTAH-20>
Subject: NP-completeness and parallelism, humor

Perhaps AI-digest readers will be amused by the following
article.  I believe it's by Danny Cohen, and appears in the
proceedings of the CMU Conference on VLSI Systems and
Computations, pages 124-125, but this copy was dug out of local
archives.

..................................................................

                      The VLSI Approach to
                    Computational Complexity

                      Professor J. Finnegan
                 University of Oceanview, Kansas
             (Formerly with the DP department of the
                First National Bank of Oceanview)

The rapid advance of  VLSI and the trend  toward the decrease  of
the geometrical  feature  size,  through the  submicron  and  the
subnano to the subpico, and beyond, have dramatically reduced the
cost  of  VLSI  circuitry.   As  a  result,  many   traditionally
unsolvable problems  can now  (or  will in  the near  future)  be
easily implemented using VLSI technology.

For example, consider the  traveling salesman problem, where  the
optimal sequence of N nodes ("cities") has to be found.   Instead
of  applying  sophisticated   mathematical  tools  that   require
investment in human thinking, which because of the rising cost of
labor  is  economically  unattractive,  VLSI  technology  can  be
applied to  construct  a  simple  machine  that  will  solve  the
problem!

The traveling salesman problem is considered difficult because of
the requirement  of finding  the best  route out  of N!  possible
ones.  A conventional single processor would require O(N!)  time,
but with clever use of VLSI technology this problem can easily be
solved in polynomial time!!

The solution is obtained with a simple VLSI array having only  N!
processors.  Each  processor is  dedicated to  a single  possible
route that  corresponds  to  a certain  permutation  of  the  set
[1,2,3,..N].  The time to load the distance matrix and to  select
the shortest  route(s)  is  only  polynomial  in  N.   Since  the
evaluation of  each route  is  linear in  N, the  entire  system
solves the problem in just polynomial time! Q.E.D.

Readers familiar only with conventional computer architecture may
wrongly suspect  that  the  communication between  all  of  these
processors is too expensive (in area).  However, with the use  of
wireless communication this problem is easily solved without  the
traditional, conventional area penalty.   If the system fails  to
obtain  from  the  FCC  the  required  permit  to  operate  in  a
reasonable  domain  of  the  frequency  spectrum,  it  is  always
possible to  use  microlasers and  picolasers  for  communicating
either through a light-conducting  substrate (e.g.  sapphire)  or
through a convex light-reflecting surface mounted parallel to the
device.   The  CSMA/CD  (Carrier  Sense  Multiple  Access,   with
Collision Detection) communication  technology, developed in  the
early seventies,  may  be found  to  be most  helpful  for  these
applications.

If it is necessary to  solve a problem with  a larger N than  the
one for which the system  was initially designed, one can  simply
design another system for that particular  value of N, or even  a
larger  one,  in  anticipation   of  future  requirements.    The
advancement of  VLSI  technology  makes  this  iterative  process
feasible and attractive.

This approach is not new.  In the early eighties many researchers
discovered the possibility of  accelerating the solution of  many
NP-complete problems by a simple  application of systems with  an
exponential number of processors.

Even earlier, in  the late seventies  many scientists  discovered
that problems with polynomial complexity could also be solved  in
lower time (than the complexity) by using a number of processors
which  is  also  a  polynomial  function  of  the  problem  size,
typically of  a  lower  degree.   NxN  matrix  multiplication  by
systems with N↑2 processors used to  be a very popular topic  for
conversations and  conference papers,  even though  less  popular
among system builders.  The requirement of dealing with the variable N
was (we believe)  handled by  the simple  P/O technique,  namely,
buying a new system for any other value of N, whenever needed.

According to the most  popular model of those  days, the cost  of
VLSI processors decreases  exponentially.  Hence the  application
of an exponential number  of processors does  not cause any  cost
increase, and  the application  of only  a polynomial  number  of
processors results in a substantial cost saving!!  The fact  that
the former exponential decrease refers  to calendar time and  the
latter to problem size probably has no bearing on this discussion
and should be ignored.

The famous Moore model of exponential cost decrease was based  on
plotting the time  trend (as has  been observed in  the past)  on
semilogarithmic scale.   For that  reason  this model  failed  to
predict the present  as seen  today.  Had  the same  observations
been plotted on a simple linear  scale, it would be obvious  that
the cost of VLSI processors is already (or about to be) negative.
This must be the case, or else there is no way to explain why  so
many researchers  design systems  with an  exponential number  of
processors and compete  for solving  the same  problem with  more
processors.

CONCLUSIONS

 - With the rapid advances of VLSI technology anything is possible.

 - The more VLSI processors in a system, the better the paper.

------------------------------

End of AIList Digest
********************

∂18-Aug-83  1135	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #42
Received: from SRI-AI by SU-AI with TCP/SMTP; 18 Aug 83  11:29:18 PDT
Date: Thursday, August 18, 1983 9:54AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #42
To: AIList@SRI-AI


AIList Digest           Thursday, 18 Aug 1983      Volume 1 : Issue 42

Today's Topics:
  Fifth Generation - National Security,
  Artificial Intelligence - Prejudice & Turing Test
----------------------------------------------------------------------

Date: Tue, 16 Aug 83 13:32:17 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: AI & Morality

  The human manner has led to all sorts of abuses.  Indeed your latest
series of messages (e.g. Spaf) has offended me.  Maybe he meant
humane?  In any event there is no need to be vulgar to make a point.
Any point.

  There are some of us who work for the US government who are very
aware of the threats of exporting high technology and deeply concerned
about the free exchange of data and information and the benefits of
such exchange.  It is only in recent years and maybe because of the
Japanese that academia has taken a greater interest in areas which
they were unwilling to look at before (current economics also makes
for strange bedfellows). Industry has always had an interest (if for
nothing more than to show us a better? wheel for bigger!  bucks).  We
are in a good position to maintain the military-industrial-university
complex (not sorry if this offends anyone) and get some good work 
done.  Recent government policy may restrict high technology flow so
that you might not even get on that airplane soon.

[...]

Mort

------------------------------

Date: Tue, 16 Aug 83 17:15:24 EDT
From: Joe Buck <buck@NRL-CSS>
Subject: frame theory of prejudice


We've heard on this list that we should consider flamers and bigots 
less than human. But doesn't Minsky's frame theory suggest that
prejudice is simply a natural by-product of the way our minds work?
When we enter a new situation, we access a "script" containing default
assumptions about the situation. If the default assumptions are
"sticky" (don't change to agree with newly obtained information), the
result is prejudice.

When I say "doctor", a picture appears in your mind, often quite
detailed, containing default assumptions about sex, age, physical
appearance, etc.  In some people, these assumptions are more firmly
held than in others.  Might some AI programs designed along these
lines show phenomena resembling human prejudice?
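
As a toy illustration of the "sticky default" idea (the class and slot
names below are invented, not Minsky's notation): a frame starts with
default assumptions, and prejudice corresponds to defaults that ignore
contradicting evidence.

```python
# Toy sketch of a "sticky default" frame.  A frame begins with default
# slot values; a "sticky" frame keeps its defaults even when an
# observation contradicts them -- a crude model of prejudice.

class Frame:
    def __init__(self, defaults, sticky=False):
        self.slots = dict(defaults)   # default assumptions
        self.sticky = sticky

    def observe(self, slot, value):
        # A flexible frame revises its assumption; a sticky one does not.
        if not self.sticky:
            self.slots[slot] = value

doctor = Frame({"sex": "male", "age": "middle-aged"})
bigot_doctor = Frame({"sex": "male", "age": "middle-aged"}, sticky=True)

doctor.observe("sex", "female")
bigot_doctor.observe("sex", "female")

print(doctor.slots["sex"])        # updated to match the evidence
print(bigot_doctor.slots["sex"])  # the default survives the evidence
```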

                                                Joe Buck
                                                buck@nrl-css

------------------------------

Date: 16 Aug 1983 1437-PDT
From: Jay <JAY@USC-ECLC>
Subject: Turing Test; Parry, Eliza, and Flamer

Parry and Eliza are fairly famous early AI projects.  One acts
paranoid, the other acts like an interested analyst.  How about
reviving these projects and challenging the Turing test?  Flamer is born.

Flamer would read messages from the net and then reply to the 
sender/bboard denying all the person said, insulting him, and in 
general making unsupported statements.  I suggest some researchers out
there make such a program and put it on the net.  The goal would be 
for the readers of the net to try to detect the Flamer, and for Flamer to
escape detection.  If the Flamer is not discovered, then it could be 
considered to have passed the Turing test.

Flamer has the advantage of being able to take a few days in 
formulating a reply; it could consult many related online sources, it
could request information concerning the subject from experts (human,
or otherwise), it could perform statistical analysis of other flames
to make appropriate word choices, it could make common errors 
(gramical, syntactical, or styleistical), and it could perform other 
complex computations.
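
One of the listed capabilities, statistical word choice, is easy to
sketch with a bigram (Markov chain) model; the sample "flames" and all
names below are invented for illustration:

```python
import random

# Toy sketch of one Flamer capability: statistical word choice via a
# bigram model trained on sample flames.  Each word is chosen from
# those that followed the previous word somewhere in the corpus.

def bigram_model(corpus):
    model = {}
    for text in corpus:
        words = text.split()
        for a, b in zip(words, words[1:]):
            model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, rng):
    out = [start]
    while len(out) < length and out[-1] in model:
        out.append(rng.choice(model[out[-1]]))
    return " ".join(out)

flames = ["you are totally wrong",
          "you are clearly confused",
          "wrong and confused as usual"]
model = bigram_model(flames)
print(generate(model, "you", 4, random.Random(1)))
```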

Perhaps Flamer is already out there, and perhaps this message is 
generated by such a program.

j'

------------------------------

Date: 16 Aug 83 20:57:20 EDT  (Tue)
From: Mark Weiser <mark%umcp-cs@UDel-Relay>
Subject: artificially intelligent bigots.

I agree that bigotry and intelligence exclude each other.  An
Eliza-like bigotry program would be simple in direct proportion to its
bigotry.

------------------------------

Date: 15 Aug 83 20:05:24-PDT (Mon)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: AI Projects on the Net
Article-I.D.: ssc-vax.417


This is a really fun topic.  The problem of the Turing Test is 
enormously difficult and *very* subtle (either that or we're 
overlooking something really obvious).  Now the net provides a
gigantic lab for enterprising researchers to try out their latest
attempts.  So far I have resisted the temptation, since there are more
basic problems to solve first!  The curious thing about an AI project
is that it can be made infinitely complicated (programs are like that;
consider emacs or nroff), certainly enough to simulate any kind of
behavior desired, whether it be bigotry, right-wingism, irascibility,
mysticism, or perhaps even ordinary rational thought.  This has been 
demonstrated by several programs, among them PARRY (simulates 
paranoia), and POLITICS (simulates arguments between ideologues) (mail
me for refs if interested).  So it doesn't appear that there is a way
to detect an AI project, based on any *particular* behavior.

A more productive approach might be to look for the capability to vary
behavior according to circumstances (self-modifiability).  I can note
that all humans appear capable of modifying their behavior, and that
very few AI programs can do so.  However, not all human behavior can
be modified, and much cannot be modified easily.  "Try not to think of
a zebra for the next ten minutes" - humans cannot change their own
thought processes to manage this feat, while an AI program would not
have much problem.  In fact, Lenat's Eurisko system (assuming we can
believe all the claims) has the capability to speed up its own
operation!  (It learned that Lisp 'eq' and 'equal' are the same for
atoms, and changed function references in its own code.)  So the
ability to change behavior cannot, by itself, be a criterion.
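
The eq/equal anecdote has a direct analogue in any language with
interned atoms; in Python terms (an illustrative reconstruction, not
Eurisko's actual code):

```python
import sys

# Sketch of the Eurisko anecdote in Python terms: for interned atoms,
# pointer identity ("eq", Python's "is") gives the same answer as
# structural equality ("equal", Python's "=="), but without traversal.

def structural_eq(a, b):          # Lisp EQUAL: compare by value
    return a == b

def pointer_eq(a, b):             # Lisp EQ: compare by identity
    return a is b

atoms = [sys.intern("foo"), sys.intern("bar"), sys.intern("foo")]

# "Self-modification": after checking that the two predicates agree on
# every pair of atoms seen so far, rebind the slow one to the fast one.
compare = structural_eq
if all(structural_eq(x, y) == pointer_eq(x, y)
       for x in atoms for y in atoms):
    compare = pointer_eq          # the program has sped itself up

print(compare is pointer_eq)
print(compare(atoms[0], atoms[2]))
```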

So how does one decide?  The question is still open....

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

ps I thought about Zeno's Paradox recently - the Greeks (especially 
Archimedes) were about a hair's breadth away from discovering 
calculus, but Zeno had crippled everybody's thinking by making a 
"paradox" where none existed.  Perhaps the Turing Test is like
that....

------------------------------

End of AIList Digest
********************

∂19-Aug-83  1927	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #43
Received: from SRI-AI by SU-AI with TCP/SMTP; 19 Aug 83  19:26:11 PDT
Date: Friday, August 19, 1983 5:26PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #43
To: AIList@SRI-AI


AIList Digest           Saturday, 20 Aug 1983      Volume 1 : Issue 43

Today's Topics:
  Administrivia - Request for Archives,
  Bindings - J. Pearl,
  Programming Languages - Loglisp & LISP CAI Packages,
  Automatic Translation - Lisp to Lisp,
  Knowledge Representation,
  Bibliographies - Sources & AI Journals
----------------------------------------------------------------------

Date: Thu 18 Aug 83 13:19:30-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Archives

I would like to hear from systems people maintaining AIList archives
at their sites.  Please msg AIList-Request@SRI-AI if you have an
online archive that is publicly available and likely to be available
under the same file name(s) for the foreseeable future.  Send any
special instructions needed (beyond anonymous FTP).  I will then make
the information available to the list.

                                        -- Ken Laws

------------------------------

Date: Thu, 18 Aug 83 13:50:16 PDT
From: Judea Pearl <f.judea@UCLA-LOCUS>
Subject: change of address

Effective September 1, 1983, and until March 1, 1984, Judea Pearl's
address will be:

        Judea Pearl
        c/o Faculty of Management
        University of Tel Aviv
        Ramat Aviv, ISRAEL

Dr. Pearl will be returning to UCLA at that time.

------------------------------

Date: Wednesday, 17 Aug 1983 17:52-PDT
From: narain@rand-unix
Subject: Information on Loglisp


You can get Loglisp (language or reports) by writing to J.A. Robinson
or E.E. Sibert at:

      C.I.S.
      313 Link Hall
      Syracuse University
      Syracuse, NY 13210


A paper on LOGLISP also appeared in "Logic Programming" eds. Clark and
Tarnlund, Academic Press 1982.

-- Sanjai

------------------------------

Date: 17 Aug 83 15:19:44-PDT (Wed)
From: decvax!ittvax!dcdwest!benson @ Ucb-Vax
Subject: LISP CAI Packages
Article-I.D.: dcdwest.214

Is there a computer-assisted instructional package for LISP that runs
under 4.1 bsd?  I would appreciate any information available and will
summarize what I learn (about the package) in net.lang.lisp.

Peter Benson decvax!ittvax!dcdwest!benson

------------------------------

Date: 17-AUG-1983 19:27
From: SHRAGER%CMU-PSY-A@CMU-CS-PT
Subject: Lisp to Lisp translation again


I'm glad that I didn't have to start this discussion up this time.
Anyhow, here's a suggestion that I think should be implemented but
which requires a great deal of Lisp community cooperation.  (Oh
dear...perhaps it's dead already!)

Probably the most intracompatible language around (next to TRAC) is
APL.  I've had a great deal of success moving APL workspaces from one
implementation to another with a minimum of effort.  Now, part of this
has to do with the fact that APL's primitive set can't be extended
easily, but if you think about it, the question of exactly how you get
all the stuff in a workspace from one machine to the other isn't an
easy one to answer.  The special character set makes each machine's
representation a little different and, of course, trying to send the
internal form would be right out!

The APL community solved this rather elegantly: they have a thing
called a "workspace interchange standard" which is in a canonical code
whose first 256 bytes are the atomic vector (character codes) for the
source machine, etc.  The beauty of this canonical representation
isn't just that it exists, but rather that the translation to and from
this code is the RESPONSIBILITY OF THE LOCAL IMPLEMENTOR!  That is,
for example, if I write a program in Franz and someone at Xerox wants
it, I run it through our local workspace outgoing translator which
puts it into the standard form and then I ship them that (presumably
messy) version.  They have a compatible ingoing translator which takes
certain combinations of constructs and translates them to InterLisp.

Now, of course, this isn't all that easy.  First we'd have to agree on
a standard but that's not so bad.  Most of the difficulty in deciding
on a standard Lisp is taste and that has nothing to do with the form
of the standard since no human ever writes in it.  Another difficulty
(here I am indebted to Ken Laws) is that many things have impure
semantics and so cannot be cleanly translated into another form --
take, for example, the spaghetti stack (please!). Anyhow, I never said
it would be easy but I don't think that it's all that difficult either
-- certainly it's easier than the automatic programming problem.

I'll bet this would make a very interesting dissertation for some
bright young Lisp hacker.  But the difficult part isn't any particular
translator.  Each is hand tailored by the implementors/supporters of a
particular lisp system. The difficult part is getting the Lisp world
to follow the example of a computing success, as, I think, the APL
world has shown workspace interchange to be.
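
A skeletal sketch of the interchange scheme described above, with
invented dialect tables standing in for real translators; the point is
only that each side writes translators to and from one canonical form,
so no dialect-to-dialect translator is ever needed:

```python
# Sketch of the APL-style interchange idea applied to Lisp (the
# keyword tables and "canonical form" here are invented): each
# implementation ships its own translators to and from one canonical
# form, and a program travels between sites as canonical text.

# Per-dialect keyword tables stand in for real outgoing/ingoing
# translators (local spelling -> canonical spelling).
DIALECTS = {
    "franz":     {"def": "defun"},
    "interlisp": {"DEFINEQ": "defun"},
}

def to_canonical(dialect, source):
    # the LOCAL implementor's outgoing translator
    for local, canon in DIALECTS[dialect].items():
        source = source.replace(local, canon)
    return source

def from_canonical(dialect, source):
    # the LOCAL implementor's ingoing translator
    for local, canon in DIALECTS[dialect].items():
        source = source.replace(canon, local)
    return source

# Franz -> canonical -> Interlisp, with no direct Franz->Interlisp step.
franz_code = "(def square (x) (* x x))"
canonical = to_canonical("franz", franz_code)
interlisp_code = from_canonical("interlisp", canonical)
print(canonical)
print(interlisp_code)
```

With N dialects this needs 2N translators instead of N*(N-1), which is
the economy of the workspace interchange standard.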

------------------------------

Date: 18 Aug 83 15:31:18-PDT (Thu)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Knowledge Representation, Programming Styles
Article-I.D.: ssc-vax.437

Actually, trees can be expressed as attribute-value pairs.  I've had
to do that to get around certain %(&↑%$* OPS5 limitations, so it's
possible, but not pretty.  However, many times your algebraic/tree 
expressions/structures have duplicated components, in which case you
would like to join two nodes at lower levels.  You then end up with a
directed structure only.  (This is also a solution for multiple
inheritance problems.)
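
A small sketch of the node-joining idea (the representation is
invented for illustration): store the tree as attribute-value triples
and hash-cons them, so that a duplicated subtree becomes one shared
node and the structure becomes a directed acyclic graph:

```python
# Expression trees stored as attribute-value triples, with
# hash-consing: identical structure gets the identical node id, so
# duplicated components are joined and the result is a DAG, not a tree.

nodes = {}          # node id -> (operator, left id, right id) or leaf
table = {}          # structural key -> node id, for sharing

def make(key):
    # hash-consing: reuse the node if this structure already exists
    if key not in table:
        table[key] = len(nodes)
        nodes[len(nodes)] = key
    return table[key]

def leaf(sym):
    return make(("leaf", sym))

def op(name, lhs, rhs):
    return make((name, lhs, rhs))

# (a + b) * (a + b): the two "+" subtrees collapse into one shared node.
s1 = op("+", leaf("a"), leaf("b"))
s2 = op("+", leaf("a"), leaf("b"))
prod = op("*", s1, s2)

print(s1 == s2)      # the duplicate was joined: same node id
print(len(nodes))    # a, b, one "+", one "*" -> 4 nodes, not 7
```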

I'll refrain from flaming about traditional (including logic)
grammars.  I'm tired of people insisting on a restricted view of
language that claims that grammar rules are the ultimate description
of syntax (semantics being irrelevant) and that idioms are irritating 
special cases.  I might note that we have basically solved the
language analysis problem (using a version of Berkeley's Phrase
Analysis that handles ambiguity) and are now working on building a
language learner to speed up the knowledge acquisition process, as
well as other interesting projects.

I don't recall a von Neumann bottleneck in AI programs, at least not 
of the kind Backus was talking about.  The main bottleneck seems to be
of a conceptual rather than a hardware nature.  After all, production 
systems are not inherently bottlenecked, but nobody really knows how 
to make them run concurrently, or exactly what to do with the results 
(I have some ideas though).

                                        stan the lep hack
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: 16 Aug 83 10:43:54-PDT (Tue)
From: ihnp4!ihuxo!fcy @ Ucb-Vax
Subject: How does one obtain university technical reports?
Article-I.D.: ihuxo.276

I think the bibliographies being posted to the net are great.  I'd 
like to follow up on some of the references, but I don't know where to
obtain copies for many of them.  Is there some standard protocol and
contact point for requesting copies of technical reports from 
universities?  Is there a service company somewhere from which one 
could order such publications with limited distribution?

                        Curiously,

                        Fred Yankowski
                        Bell Labs Rm 6B-216
                        Naperville, IL
                        ihnp4!ihuxo!fcy


[I published all the addresses I know in V1 #8, May 22.  Two that
might be of help are:

    National Technical Information Service
    5285 Port Royal Road
    Springfield, Virginia  22161

    University Microfilms
    300 North Zeeb Road
    Ann Arbor, MI  48106

You might be able to get ordering information for many sources
through your corporate or public library.  You could also contact
LIBRARY@SCORE; I'm sure Richard Manuck  would be willing to help.
If all else fails, put out a call for help through AIList. -- KIL]

------------------------------

Date: 17 Aug 83 1:14:51-PDT (Wed)
From: decvax!genrad!mit-eddie!gumby @ Ucb-Vax
Subject: Re: How does one obtain university technical reports?
Article-I.D.: mit-eddi.616

Bizarrely enough, MIT and Stanford AI memos were recently issued by 
some company on MICROFILM (!) for some exorbitant price.  This price 
supposedly gives you all of them plus an introduction by Marvin
Minsky.  They advertised in Scientific American a few months ago.  I
guess this is a good deal for large institutions like Bell, but
smaller places are unlikely to have a microfilm (or was it fiche)
reader.

MIT AI TR's and memos can be obtained from Publications, MIT AI Lab, 
8th floor, 545 Technology Square, Cambridge, MA 02139.


[See AI Magazine, Vol. 4, No. 1, Winter-Spring 1983, pp. 19-22, for 
Marvin Minsky's "Introduction to the COMTEX Microfiche Edition of the
Early MIT Artificial Intelligence Memos".  An ad on p. 18 offers the
set for $2450.  -- KIL]

------------------------------

Date: 17 Aug 83 10:11:33-PDT (Wed)
From: harpo!eagle!mhuxt!mhuxi!mhuxa!ulysses!cbosgd!cbscd5!lvc @
      Ucb-Vax
Subject: List of AI Journals
Article-I.D.: cbscd5.419

Here is the list of AI journals that I was able to put together from
the generous contributions of several readers.  Sorry about the delay.
Most of the addresses, summary descriptions, and phone numbers for the
journals were obtained from "The Standard Periodical Directory"
published by Oxbridge Communications Inc., 183 Madison Avenue, Suite
1108, New York, NY 10016; (212) 689-8524.  Other sources you may wish to
try are Ulrich's International Periodicals Directory, and Ayer
Directory of Publications.  These three reference books should be
available in most libraries.

*************************
AI Journals and Magazines 
*************************

------------------------------
AI Magazine
        American Association for Artificial Intelligence
        445 Burgess Drive
        Menlo Park, CA 94025
        (415) 328-3123
        AAAI-OFFICE@SUMEX-AIM
        Quarterly, $25/year, $15 Student, $100 Academic/Corporate
------------------------------
Artificial Intelligence
        Elsevier Science Publishers B.V. (North-Holland)
        P.O. Box 211
        1000 AE Amsterdam, The Netherlands
        About 8 issues/year, 880 Df. (approx. $352)
------------------------------
American Journal of Computational Linguistics
        Donald E. Walker
        SRI International
        333 Ravenswood Avenue
        Menlo Park, CA 94025
        (415) 859-3071
        Quarterly, individual ACL members $15/year, institutions $30.
------------------------------
Robotics Age
        Robotics Publishing Corp.
        174 Concord St.
        Peterborough NH 03458
        (603) 924-7136
        Technical articles related to design and implementation of
        intelligent machine systems
        Bimonthly, No price quoted
------------------------------
SIGART Newsletter
        Association for Computing Machinery
        11 W. 42nd St., 3rd fl.
	New York NY 10036
	(212) 869-7440
        Artificial intelligence news, reports, abstracts, educational
        material, etc.  Book reviews.
        Bimonthly $12/year, $3/copy
------------------------------
Cognitive Science
        Ablex Publishing Corp.
        355 Chestnut St.
	Norwood NJ 07648
	(201) 767-8450
        Articles devoted to the emerging fields of cognitive
        psychology and artificial intelligence.
        Quarterly $22/year
------------------------------
International Journal of Man Machine Studies
        Academic Press Inc.
        111 Fifth Avenue
	New York NY 10013
	(212) 741-4000
        No description given.
        Quarterly $26.50/year
------------------------------
IEEE Transactions on Pattern Analysis and Machine Intelligence
        IEEE Computer Society
        10662 Los Vaqueros Circle,
	Los Alamitos CA 90720
	(714) 821-8380
        Technical papers dealing with advancements in artificial
        machine intelligence
        Bimonthly $70/year, $12/copy
------------------------------
Behavioral and Brain Sciences
        Cambridge University Press
        32 East 57th St.
	New York NY 10022
	(212) 688-8885
        Research in the areas of psychology, neuroscience,
	behavioral biology, and cognitive science; continuing
	open peer commentary is published in each issue.
        Quarterly $95/year, $27/copy
------------------------------
Pattern Recognition
        Pergamon Press Inc.
        Maxwell House, Fairview Park
        Elmsford NY 10523
	(914) 592-7700
        Official journal of the Pattern Recognition Society
        Bimonthly $170/year, $29/copy
------------------------------

************************************
Other journals of possible interest.
************************************

------------------------------
Brain and Cognition
        Academic Press
        111 Fifth Avenue
	New York NY 10003
	(212) 741-6800
        The latest research in the nonlinguistic aspects of
        neuropsychology.
        Quarterly $45/year
------------------------------
Brain and Language
        Academic Press, Journal Subscription
        111 Fifth Avenue
	New York NY 10003
	(212) 741-6800
        No description given.
        Quarterly $30/year
------------------------------
Human Intelligence
        P.O. Box 1163
        Birmingham MI 48012
	(313) 642-3104
        Explores the research and application of ideas on human
	intelligence.
        Bimonthly newsletter - No price quoted.
------------------------------
Intelligence
        Ablex Publishing Corp.
        355 Chestnut St.
	Norwood NJ 07648
	(201) 767-8450
        Original research, theoretical studies and review papers
        contributing to understanding of intelligence.
        Quarterly $20/year
------------------------------
Journal of the Assn. for the Study of Perception
        P.O. Box 744
	DeKalb IL 60115
        No description given.
        Semiannually $6/year
------------------------------
Computational Linguistics and Computer Languages
        Humanities Press
        Atlantic Highlands NJ 07716
	(201) 872-1441
        Articles deal with syntactic and semantic of [missing word]
        languages relating to math and computer science, primarily
        those which summarize, survey, and evaluate.
        Semimonthly $46.50/year
------------------------------
Annual Review in Automatic Programming
        Maxwell House, Fairview Park
        Elmsford NY 10523
	(914) 592-7700
        A comprehensive treatment of some major topics selected
        for their current importance.
        Annual $57/year
------------------------------
Computer
        IEEE Computer Society
        10662 Los Vaqueros Circle
        Los Alamitos, CA 90720
        (714) 821-8380
        Monthly, $6/copy, free with Computer Society Membership
------------------------------
Communications of the ACM
        Association for Computing Machinery
        11 West 42nd Street
        New York, NY 10036
        Monthly, $65/year, free with membership ($50, $15 student)
------------------------------
Journal of the ACM
        Association for Computing Machinery
        11 West 42nd Street
        New York, NY 10036
        Computer science, including some game theory,
        search, foundations of AI
        Quarterly, $10/year for members, $50 for nonmembers
------------------------------
Cognition
        Associated Scientific Publishers b.v.
        P.O. Box 211
        1000 AE Amsterdam, The Netherlands
        Theoretical and experimental studies of the mind, book reviews
        Bimonthly, 140 Df./year (~ $56), 240 Df. institutional
------------------------------
Cognitive Psychology
        Academic Press
        111 Fifth Avenue
        New York, NY 10003
        Quarterly, $74 U.S., $87 elsewhere
------------------------------
Robotics Today
        Robotics Today
        One SME Drive
        P.O. Box 930
        Dearborn, MI 48121
        Robotics in Manufacturing
        Bimonthly, $36/year unless member of SME or RIA
------------------------------
Computer Vision, Graphics, and Image Processing
        Academic Press
        111 Fifth Avenue
        New York, NY 10003
        $260/year U.S. and Canada, $295 elsewhere
------------------------------
Speech Technology
        Media Dimensions, Inc.
        525 East 82nd Street
        New York, NY 10028
        (212) 680-6451
        Man/machine voice communications
        Quarterly, $50/year
------------------------------

*******************************
    Names, but no addresses
*******************************

        Magazines
        --------

AISB Newsletter

        Proceedings
        -----------

IJCAI	International Joint Conference on AI
AAAI	American Association for Artificial Intelligence
TINLAP	Theoretical Issues in Natural Language Processing
ACL	Association for Computational Linguistics
AIM	AI in Medicine
MLW	Machine Learning Workshop
CVPR	Computer Vision and Pattern Recognition (formerly PRIP)
PR	Pattern Recognition
IUW	Image Understanding Workshop (DARPA)
T&A	Trends and Applications (IEEE, NBS)
DADCM	Workshop on Data Abstraction, Databases, and Conceptual Modeling
CogSci	Cognitive Science Society
EAIC	European AI Conference


Thanks again to all that contributed.

Larry Cipriani
cbosgd!cbscd5!lvc

------------------------------

End of AIList Digest
********************

∂22-Aug-83  1145	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #44
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Aug 83  11:41:46 PDT
Date: Monday, August 22, 1983 9:39AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #44
To: AIList@SRI-AI


AIList Digest            Monday, 22 Aug 1983       Volume 1 : Issue 44

Today's Topics:
  AI Architecture - Parallel Processor Request,
  Computational Complexity - Maximum Speed,
  Functional Programming,
  Concurrency - Production Systems & Hardware,
  Programming Languages - NETL
----------------------------------------------------------------------

Date: 18 Aug 83 17:30:43-PDT (Thu)
From: decvax!linus!philabs!sdcsvax!noscvax!revc @ Ucb-Vax
Subject: Looking for parallel processor systems
Article-I.D.: noscvax.182

We have been looking into systems to replace our current ANALOG
computers.  They are the central component in a real time simulation
system.  To date, the only system we've seen that looks like it might
do the job is the Zmob system being built at the Univ. of Md (Mark
Weiser).

I would appreciate it if you could supply me with pointers to other
systems that might support high speed, high quality, parallel
processing.

Note: most High Speed networks are just too slow and we can't justify
a Cray-1.

Bob Van Cleef

uucp: {decvax!ucbvax || philabs}!sdcsvax!nosc!revc
arpa: revc@nosc
CompuServe: 71565,533

------------------------------

Date: 19 Aug 83 20:29:13-PDT (Fri)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: maximum speed
Article-I.D.: ssc-vax.445

Hmmm, I didn't know that addition of n numbers could be performed 
simultaneously - ok then, constant time matrix multiplication, given 
enough processors.  I still haven't seen any hard data on limits to
speed because of communications problems.  If it seems like there are
limits but you can't prove it, then maybe you haven't discovered the
cleverest way to do it yet...
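For what it's worth, the "simultaneous" addition of n numbers usually means a balanced reduction tree: n values summed in O(log n) parallel combining steps, not one. A sequential Python sketch of that combining schedule (my illustration, not from the posting):

```python
def tree_sum(xs):
    """Sum a list the way a parallel reduction tree would:
    adjacent pairs combine at each level, O(log n) levels deep.
    Executed sequentially here; each level could run concurrently."""
    xs = list(xs)
    levels = 0
    while len(xs) > 1:
        # one parallel step: combine adjacent pairs
        xs = [xs[i] + xs[i + 1] if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
        levels += 1
    return xs[0], levels

total, depth = tree_sum(range(8))
assert (total, depth) == (28, 3)   # 8 numbers summed in 3 levels
```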

                                        stan the lep hack
                                        ssc-vax!sts (soon utah-cs)

ps The space cost of constant or log time matrix mults is of course
   ridiculous

pps Perhaps this should move to net.applic?

------------------------------

Date: Fri, 19 Aug 83 15:08:15 EDT
From: Paul Broome (CTAB) <broome@brl-bmd>
Subject: Re: Functional programming and AI

Stan,

Let me climb into my pulpit and respond to your FP/AI prod.  I don't 
think FP and AI are diametrically opposed.  To refresh everyone's
memory here are some of your comments.


        ...  Having worked with both AI and FP languages,
        it seems to me that the two are diametrically
        opposed to one another.  The ultimate goal of functional
        programming language research is to produce a language that
        is as clean and free of side effects as possible; one whose
        semantic definition fits on a single side of an 8 1/2 x 11
        sheet of paper ...

Looking at Backus' Turing award lecture, I'd have to say that
cleanliness and freedom from side effects are two of Backus' goals but
certainly not succinctness of definition.  In fact Backus says (CACM,
Aug.  78, p. 620), "Our intention is to provide FP systems with widely
useful and powerful primitive functions rather than weak ones that 
could then be used to define useful ones."

Although FP has no side effects, Backus also talked about applicative
state transition systems (AST) with one top-level change of state per
computation, i.e. one side effect.  The world of expressions is a
nice, orderly one; the world of statements has all the mush.  He's
trying to move the statement part out of the way.

I'd have to say one important part of the research in FP systems is to
define and examine functional forms (program forming operations) with 
nice mathematical properties.  A good way to incorporate (read 
implement) a mathematical concept in a computer program is without 
side effects.  This side effect freeness is nice because it means that
a program is 'referentially transparent', i.e. it can be used without
concern about collision with internal names or memory locations AND
the program is dependable; it always does the same thing.
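A toy illustration of the referential-transparency point (Python, my own example, not Broome's): the pure version always returns the same value for the same argument, while the stateful one depends on hidden history:

```python
def pure_double(x):
    # referentially transparent: output depends only on the input
    return 2 * x

history = []
def impure_double(x):
    # same arithmetic, but a side effect makes calls non-repeatable
    history.append(x)
    return 2 * x + len(history) - 1   # result varies with call count

assert pure_double(3) == pure_double(3)       # always 6
assert impure_double(3) != impure_double(3)   # 6, then 7
```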

A second nice thing about applicative languages is that they are
appropriate for parallel execution.  In a shared memory model of
computation (e.g. Ada) it's very difficult (NP-complete, see CACM, a
couple of months ago) to tell if there is collision between
processors, i.e. is a processor overwriting data that another
processor needs.


        On the other hand, the goal of AI research (at least in the
        AI language area) is to produce languages that can effectively
        work with as tangled and complicated representations of
        knowledge as possible.  Languages for semantic nets, frames,
        production systems, etc, all have this character.

I don't think that's the goal of AI research but I can't offer a
better one at the moment.  (Sometimes it looks as if the goal is to
make money.)

Large, tangled structures can be handled in applicative systems but
not efficiently, at least I don't see how.  If you view a database
update as a function mapping the pair (NewData, OldDatabase) into
NewDatabase you have to expect a new database as the returned value.
Conceptually that's not a problem.  However, operationally there
should just be a minor modification of the original database when
there is no sharing and suspended modification when the database is
being shared.  There are limited transformations that can help but
there is much room for improvement.
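The "minor modification with sharing" asked for here is what is now called a persistent data structure. A hypothetical Python sketch (mine, for illustration) with a binary search tree standing in for the database: a functional insert copies only the root-to-leaf path and shares every other subtree between the old and new versions:

```python
class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(db, key):
    """Functional update: returns a NEW database; the old one is
    untouched.  Only O(depth) nodes are copied; all other subtrees
    are shared between the two versions."""
    if db is None:
        return Node(key)
    if key < db.key:
        return Node(db.key, insert(db.left, key), db.right)
    return Node(db.key, db.left, insert(db.right, key))

old = insert(insert(insert(None, 5), 2), 8)
new = insert(old, 7)            # copies the path 5 -> 8 only
assert old.right.left is None   # old version is unchanged
assert new.left is old.left     # left subtree shared, not copied
```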

An important point in all this is program transformation.  As we build
bigger and smarter systems we widen the gap between the way we think 
and the hardware.  We need to write clear, easy to understand, and 
large-chunked programs but transform them (within the same source 
language) into possibly less clear, but more efficient programs.  
Program transformation is much easier when there are no side effects.

        Now between the Japanese 5th generation project (and the US
        response) and the various projects to build non-von Neumann
        machines using FP, it looks to me like the seeds of a
        controversy over the best way to do programming.  Should we be
        using FP languages or AI languages?  We can't have it both ways,
        right?  Or can we?

A central issue is efficiency.  The first FORTRAN compiler was viewed
with the same distrust that the public had about computers in general.
Early programmers didn't want to relinquish explicit management of
registers or whatever because they didn't think the compiler could do
as well as they.  Later there was skepticism about garbage collection
and memory management.  A multitude of sins is committed in the name
of (machine) efficiency at the expense of people efficiency.  We
should concern ourselves more with WHAT objects are stored than with
HOW they are stored.

There's no doubt that applicative languages are applicable.  The
Japanese (fortunately for them) are less affected by, as Dijkstra puts
it, "our antimathematical age."  And they, unlike us, are willing to
sacrifice some short term goals for long term goals.


- Paul Broome
  (broome@brl)

------------------------------

Date: 17 Aug 83 17:06:13-PDT (Wed)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: FP and AI - (nf)
Article-I.D.: ssc-vax.427

There *is* a powerful functional language underlying most AI programs
- Lisp!  But it's never pure Lisp.  The realization that got me to
thinking about this was the apparent necessity for list surgery,
sooner or later.  rplaca and allied functions show up in the strangest
places, and seem to be crucial to the proper functioning of many AI
systems (consider inheritance in frames or the construction of a
semantic network; perhaps method combination in flavors qualifies).
I'm not arguing that an FP language could *not* be used to build an AI
language on top; I'm thinking more about fundamental philosophical
differences between different schools of research.
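The rplaca issue described above has a direct analogue in any language with mutable references. A rough Python sketch (mine, not Stan's): destructive update of a shared node is visible through every reference at once, which is exactly what makes list surgery both useful for semantic nets and impure:

```python
# Two "frames" share one parent structure, as in a semantic net.
animal = {"legs": 4}
dog = {"isa": animal}
cat = {"isa": animal}

# Destructive surgery (the rplaca analogue): mutate the shared node.
animal["legs"] = "varies"

# Every structure holding a reference sees the change at once --
# no recopying, but also no referential transparency.
assert dog["isa"]["legs"] == "varies"
assert cat["isa"]["legs"] == "varies"
assert dog["isa"] is cat["isa"]
```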

                                        stan the lep hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: Sat 20 Aug 83 12:28:17-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: So the language analysis problem has been solved?!?

I will also refrain from flaming, but not from taking to task 
excessive claims.

    I'll refrain from flaming about traditional (including
    logic) grammars.  I'm tired of people insisting on a
    restricted view of language that claims that grammar rules
    are the ultimate description of syntax (semantics being
    irrelevant) and that idioms are irritating special cases.  I
    might note that we have basically solved the language
    analysis problem (using a version of Berkeley's Phrase
    Analysis that handles ambiguity) ...

I would love to test that "solution of the language analysis 
problem"... As for the author being "tired of people insisting on a 
restricted ...", he is just tired of his own straw people, because 
there doesn't seem to be anybody around anymore claiming that 
"semantics is irrelevant".  Formal grammars (logic or otherwise) are 
just a convenient mathematical technique for representing SOME 
regularities in language in a modular and testable form. OF COURSE, a 
formal grammar seen from the PROCEDURAL point of view can be replaced 
by any arbitrary "ball of string" with the same operational semantics.
What this replacement does to modularity, testability and 
reproducibility of results is sadly clear in the large amount of 
published "research" in natural language analysis which is untestable 
and irreproducible. The methodological failure of this approach 
becomes obvious if one considers the analogous proposal of replacing 
the principles and equations of some modern physical theory (general 
relativity, say) by a computer program which computes "solutions" to 
the equations for some unspecified subset of their domain, some of 
these solutions being approximate or plain wrong for some (again 
unspecified) set of cases. Even if such a program were "right" all the
time (in contradiction with all our experience so far), its sheer 
opacity would make it useless as scientific explanation.

Furthermore, when mentioning "semantics", one better say which KIND of
semantics one means. For example, grammar rules fit very well with 
various kinds of truth-theoretic and model-theoretic semantics, so the
comment above cannot be about that kind of semantics. Again, a theory 
of semantics needs to be testable and reproducible, and, I would 
claim, it only qualifies if it allows the representation of a 
potential infinity of situation patterns in a finite way.

    I don't recall a von Neumann bottleneck in AI programs, at
    least not of the kind Backus was talking about.  The main
    bottleneck seems to be of a conceptual rather than a
    hardware nature.  After all, production systems are not
    inherently bottlenecked, but nobody really knows how to make
    them run concurrently, or exactly what to do with the
    results (I have some ideas though).

The reason why nobody knows how to make production systems run 
concurrently is simply because they use a global state and side 
effects. This IS precisely the von Neumann bottleneck, as made clear 
in Backus' article, and is a conceptual limitation with hardware 
consequences rather than a purely hardware limitation. Otherwise, why 
would Backus address the problem by proposing a new LANGUAGE (fp), 
rather than a new computer architecture?  If your AI program was 
written in a language without side effects (such as PURE Prolog), the 
opportunities for parallelism would be there. This would be 
particularly welcome in natural language analysis with logic (or other
formal) grammars, because dealing with more and more complex subsets 
of language needs an increasing number of grammar rules and rules of 
inference, if the results are to be accurate and predictable.  
Analysis times, even if they are polynomial on the size of the input, 
may grow EXPONENTIALLY with the size of the grammar.

                                Fernando Pereira
                                AI Center
                                SRI International
                                pereira@sri-ai

------------------------------

Date: 15 Aug 83 22:44:05-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: data flow computers and PS's - (nf)
Article-I.D.: uiucdcs.2573


The nodes in a data-flow machine, in order to compute efficiently,
must be able to do a local computation.  This is why arithmetic or
logical operations are O.K. to distribute.  Your scheme, however,
seems to require that the database of propositions be available to
each node, so that the known facts can be deduced "instantaneously".
This would cause severe problems with the whole idea of concurrency,
because either the database would have to be replicated and passed
through the network, or an elaborate system of memory locks would need
to be established.
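The locality requirement Pollack describes can be shown in a few lines. A toy sketch (my own, in Python): a dataflow node fires purely on the arrival of its own input tokens, consulting no global store; that is precisely the property a shared proposition database would break:

```python
class DataflowNode:
    """A node fires only when all of its input tokens are present:
    the computation is purely local, needing no global memory."""
    def __init__(self, n_inputs, fn):
        self.slots = [None] * n_inputs
        self.fn = fn

    def receive(self, port, token):
        self.slots[port] = token
        if all(s is not None for s in self.slots):
            result = self.fn(*self.slots)
            self.slots = [None] * len(self.slots)   # consume tokens
            return result        # would be sent downstream
        return None              # not ready yet; no firing

add = DataflowNode(2, lambda a, b: a + b)
assert add.receive(0, 3) is None   # only one operand so far
assert add.receive(1, 4) == 7      # both present: the node fires
```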

The Hearsay system from CMU was one of the early PS's with claims to a
concurrent implementation. There is a paper I remember in IEEE ToC (75
or 76) which discussed the problems of speedup and locks.

Also, I think John Holland (of Michigan?) is currently working on a 
parallel PS machine (but doesn't call it that!)


Jordan Pollack
University of Illinois
...!pur-ee!uiucdcs!uicsl!pollack

------------------------------

Date: 17 Aug 83 16:56:55-PDT (Wed)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: data flow computers and PS's - (nf)
Article-I.D.: ssc-vax.426

A concurrent PS is not too impossible, 'cause I've got one 
(specialized for NL processing and not actually implemented 
concurrently, but certainly capable).  It is true that the working
memory would have to be carefully organized, but that's a matter of
sufficiently clever design; there are no fundamental theoretical
problems.  Traditional approaches won't work, because two concurrently
operating rules may come to contradictory conclusions, both of which
may be valid.  You need a way to store both of these and use them.

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: 18 Aug 83 0516 EDT
From: Dave.Touretzky@CMU-CS-A
Subject: NETL

I am a graduate student of Scott Fahlman's, and I've been working on
NETL for the last five years.  There are some interesting lessons to
be learned from the history of the NETL project.  NETL was a
combination of a parallel computer architecture, called a parallel
marker propagation machine, and a representation language that
appeared to fit well on this architecture.  There will probably never
be a hardware implementation of the NETL Machine, although it is
certainly feasible.  Here's why...

The first problem with NETL is its radical semantics:  no one
completely understands their implications.  We (Scott Fahlman, Walter
van Roggen, and I) wrote a paper in IJCAI-81 describing the problems
we had figuring out how exceptions should interact with multiple
inheritance in the IS-A hierarchy and why the original NETL system
handled exceptions incorrectly.  We offered a solution in our paper,
but the solution turned out to be wrong.  Consider that NETL
contains many features besides exceptions and inheritance, e.g.
contexts, roles, propositional statements, quantifiers, and so on,
and that all of these features can interact (!!): a role (a "slot"
in frame lingo) may exist only within certain contexts, have
exceptions to its existence (not its value, which is another matter)
in certain sub-contexts, and be mapped multiple times because of the
multiple inheritance feature.  It becomes clear just how complicated
the semantics of NETL really is.  KL-ONE is in a similar position,
although its semantics are less radical than NETL's.
Fahlman's book contains many simple examples of network notation
coupled with appeals to the reader's intuition; what it doesn't
contain is a precise mathematical definition of the meaning of a NETL
network because no such definition existed at that time.  It wasn't
even clear that a formal definition was necessary, until we began to
appreciate the complexity of the semantic problems.  NETL's operators
are *very* nonstandard; NETL is the best evidence I know of that
semantic networks need not be simply notational variants of logic,
even modal or nonmonotonic logics.

In my thesis (forthcoming) I develop a formal semantics for multiple 
inheritance with exceptions in semantic network languages such as
NETL.  This brings us to the second problem.  If we choose a
reasonable formal semantics for inheritance, then inheritance cannot
be computed on a marker propagation machine, because we need to pass
around more information than is possible on such a limited
architecture.  The algorithms that were supposed to implement NETL on
a marker propagation machine were wrong:  they suffered from race
conditions and other nasty behavior when run on nontrivial networks.
There is a solution called "conditioning" in which the network is
pre-processed on a serial machine by adding enough extra links to
ensure that the marker propagation algorithms always produce correct 
results.  But the need for serial preprocessing removes much of the 
attractiveness of the parallel architecture.
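The inheritance-with-exceptions problem described above can be stated compactly. A toy Python sketch (mine, not NETL's algorithm): a lookup walks up the is-a hierarchy and the most specific value wins, so an exception on a subclass overrides the inherited default. The difficulty NETL ran into appears as soon as a node has two parents whose paths yield contradictory answers:

```python
# A tiny is-a hierarchy with one exception (names are invented).
isa = {"clyde": ["royal_elephant"], "royal_elephant": ["elephant"]}
props = {"elephant": {"color": "gray"},
         "royal_elephant": {"color": "white"}}   # exception to default

def lookup(node, attr):
    """Walk up the is-a links; the most specific value wins.  With a
    single parent chain this is unambiguous; with multiple parents,
    two paths can return contradictory values, which is the semantic
    problem discussed in the message above."""
    if attr in props.get(node, {}):
        return props[node][attr]
    for parent in isa.get(node, []):
        found = lookup(parent, attr)
        if found is not None:
            return found
    return None

assert lookup("clyde", "color") == "white"     # exception overrides
assert lookup("elephant", "color") == "gray"   # default still holds
```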

I think the NETL language design stands on its own as a major
contribution to knowledge representation.  It raises fascinating
semantic problems, most of which remain to be solved.  The marker
propagation part doesn't look too promising, though.  Systems with
NETL-like semantics will almost certainly be built in the future, but
I predict they will be built on top of different parallel
architectures.

-- Dave Touretzky

------------------------------

Date: Thu 18 Aug 83 13:46:13-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: NETL and hardware

        In Volume 40 of the AIList Alan Glasser asked about hardware 
implimentations using marker passing a la NETL. The closest hardware I
am aware of is called the Connection Machine, and is being developed 
at MIT by Alan Bawden, Dave Christman, and Danny Hillis (apologies if
I left someone out). The project involves building a model with about
2↑10 processors. I'm not sure of its current status, though I have
heard that a company is forming to build and market prototype CM's.

        I have heard rumors of the SPICE project at CMU; though I am
not aware of any results pertaining to hardware, the project seems to
have some measure of priority there. Hopefully members of each of
these projects will also send notes to AIList...

David Rogers, DRogers@SUMEX-AIM

------------------------------

Date: Thu, 18 Aug 1983  22:01 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
Subject: NETL


I've only got time for a very quick response to Alan Glasser's query 
about NETL.  Since the book was published we have done the following:

1. Our group at CMU has developed several design sketches for
practical NETL machine implementations of about a million processing
elements.  We haven't built one yet, for reasons described below.

2. David B. McDonald has done a Ph.D. thesis on noun group
understanding (things like "glass wine glass") using a NETL-type
network to hold the necessary world knowledge.  (This is available as
a CMU Tech Report.)

3. David Touretzky has done a thorough logical analysis of NETL-style 
inheritance with exceptions, and is currently writing up his thesis on
this topic.

4. I have been studying the fundamental strengths and limitations of 
NETL-like marker-passing compared to other kinds of massively parallel
computation.  This has gradually led me to prefer an architecture that
passes numbers or continuous values to the single-bit marker-passing of
NETL.

For the past couple of years, I've been putting most of my time into
the Common Lisp effort -- a brief foray into tool building that got
out of hand -- and this has delayed any plans to begin work on a NETL
machine.  Now that our Common Lisp is nearly finished, I can think
again about starting a hardware project, but something more exciting
than NETL has come along: the Boltzmann Machine architecture that I am
working on with Geoff Hinton of CMU and Terry Sejnowski of
Johns-Hopkins.  We will be presenting a paper on this at AAAI.

Very briefly, the Boltzmann machine is a massively parallel
architecture in which each piece of knowledge is distributed over many
units, unlike NETL in which concepts are associated with particular
pieces of hardware.  If we can make it work, this has interesting
implications for reliable large-scale implementation, and it is also a
much more plausible model for neural processing than is something like
NETL.

So that's what has happened to NETL.

-- Scott Fahlman (FAHLMAN@CMU-CS-C)

------------------------------

End of AIList Digest
********************

∂22-Aug-83  1347	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #45
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Aug 83  13:46:49 PDT
Date: Monday, August 22, 1983 10:08AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #45
To: AIList@SRI-AI


AIList Digest            Monday, 22 Aug 1983       Volume 1 : Issue 45

Today's Topics:
  Language Translation - Lisp-to-Lisp,
  Programming Languages - Lisps on 68000s and SUNs
----------------------------------------------------------------------

Date: 19 Aug 1983 2113-PDT
From: VANBUER@USC-ECL
Subject: Lisp Interchange Standard

In response to your message sent Friday, August 19, 1983 5:26PM

On Lisp translation via a standard form:

I have used Interlisp Transor a fair amount both into and out of
Interlisp (even experimented with translation to C), and the kind of
thing that makes it very difficult, especially if you want to retain
some efficiency, is subtle differences in what seem to be fairly
standard functions:  e.g., in Interlisp (DREMOVE (CAR X) X) will be EQ
to X (though not EQUAL, of course) except in the case where the result
is NIL; both CAR and CDR of the lead cell are RPLACed so that all
references to the value of X also see the DREMOVE as a side effect.
In Franz Lisp, the DREMOVE would have the value (CDR X) in most cases,
but no RPLACing is done.  In most cases this isn't a problem, but ....
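
[The divergence described above can be sketched as follows; this
example is illustrative and is not from the original message.]

```lisp
(SETQ X '(A B C))
(DREMOVE (CAR X) X)
;; Interlisp: both CAR and CDR of the first cell are RPLACed, so the
;; result is (B C), EQ to X -- every holder of X now sees (B C).
;; Franz Lisp: the result is (CDR X), also (B C), but the first cell
;; is left alone, so anything still holding X sees the old (A B C).
```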
In APL, at least the majority of the language has the same semantics
in all implementations.
        Darrel J. Van Buer, SDC

------------------------------

Date: 20 Aug 1983 1226-PDT
From: FC01@USC-ECL
Subject: Re: Language Translation

I like the APL person's [Shrager's] point of view on translation.
The problem seems to be that APL has all the things it needs in its
primitive functions.  Lisp implementers have seen fit to impurify
their language by adding so much fancy stuff, which they then depend
on heavily.  If every Lisp program were translated into Lisp 1.5 (or
so), it would be easy to port things, but it would result in
inefficient implementations.  I like APL; in fact, I like it so much
I've begun maintaining it on our unix system. I've fixed several
bugs, and it now seems to work very well.  It has everything any
other APL has, but nobody seems to want to use it except me. I write
simulators in a day, adaptive networks in a week, and analyze
matrices in seconds. So at any rate, anyone who is interested in APL
on the VAX - especially for machine intelligence applications -
please get in touch with me.  It's not ludicrous, by the way: IBM does
more
internal R+D in APL than in any other language! That includes their
robotics programs where they do lots of ARM solutions (matrix
manipulation being built into APL has tremendous advantages in this
domain).

FLAME ON!
[I believe this refers to Stan the Leprechaun's submission in
V1 #43. -- KIL]

So if your language translation program is the last word in
translators, how come it's not in the journals? How come nobody knows 
that it solves all the problems of translation? How come you haven't
made a lot of money selling COBOL to PASCAL to C to APL to LISP to
ASSEMBLER to BASIC to ... translators in the open market? Is it that
it only works for limited cases? Is it that it only deals with
'natural' languages? Is it really as good as you think, or do you only
think it's really good?  How about sharing your (hopefully
non-NP-complete) solution to an NP-complete problem with the rest of us!
FLAME OFF!

[...]
                Fred

------------------------------

Date: Sat 20 Aug 83 15:18:13-PDT
From: Mabry Tyson <Tyson@SRI-AI.ARPA>
Subject: Lisp-to-Lisp translation

Some of the comments on Lisp-to-Lisp translation seem to be rather
naive.  Translating code that works on pure S-expressions is usually
not too difficult.  However, real Lisp code is not pure Lisp.

I am presently translating some code from Interlisp to Zetalisp (from
a Dec-20 to a Symbolics 3600) and thought a few comments might be
appropriate.  First off, Interlisp has TRANSOR, a programmable
package for translating between Lisps.  It isn't used often, but it
does some of the basic translations.  There is an Interlisp
Compatibility Package (ILCP) on the 3600, which, when combined with a
CONVERT program to translate from Interlisp (running in Interlisp),
covers a fair amount of Interlisp.  (Unfortunately it is still early
in its development - I just rewrote all the I/O functions because they
didn't work for me.)

Even with these aids there are lots of problems.  Here are a few
examples I have come across:  In the source language, taking the CAR
of an atom did not cause an error.  Apparently laziness prevented the
author from writing code to check whether some input was an atom
(which was legal input) before seeing if the CAR of it was some
special symbol.

Since Interlisp-10 is short of cons-cell room, many relatively obscure
pieces of code were designed to use few conses.  Thus the author used 
and reused scratch lists and scratch strings.  The exact effect
couldn't be duplicated.  In particular, he would put characters into
specific spots in the scratch string and then would collect the whole
string.  (I'm translating this into arrays.)

All the I/O has to be changed around.  The program used screen control
characters to do fancy I/O on the screen.  It just printed the right
string to go to wherever it wanted.  You can't print a string on the
3600 to do that.  Also, whether you get an end-of-line character at
the end of input is different (so I have to hand patch code that did a
(RATOM) (READC)).  And of course file names (as well as the default
part of them, i.e., the directory) are all different.

Then there are little differences which the compatibility package can
take care of but which introduce inefficiencies.  For instance, the
function that returns the first position of a character in a string
differs between the two Lisps because the values returned are off by
1.  So code in which the author used that function just to determine
whether the character was in the string now computes the position and
then offsets it by 1.
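
[The kind of shim involved might look like the following sketch; the
function names are assumed for illustration, not taken from the ILCP.
Interlisp's STRPOS answers a 1-origin index, while the native Zetalisp
search returns a 0-origin index or NIL.]

```lisp
(DEFUN COMPAT-STRPOS (CHAR STRING)
  (LET ((POS (STRING-SEARCH-CHAR CHAR STRING)))  ; 0-origin index or NIL
    (AND POS (1+ POS))))                         ; shift to 1-origin
;; A caller that only wants to know IF the character occurs now pays
;; for the position arithmetic anyway.
```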

The ILCP does have the nice advantage of letting me use the Interlisp
name for functions even though there is a similarly named, but
different, function in Zetalisp.

Unfortunately for me, this code is going to continue to be developed
on the Dec-20 while we want to get the same code up on the 3600.  So I
have to set things up so that the translation can happen often, rather
than just once.  That means going back to the Interlisp code and
putting it into shape so that a minimum amount of hand-patching needs
to be done.

------------------------------

Date: 19 Aug 83 10:52:11-PDT (Fri)
From: harpo!eagle!allegra!jdd @ Ucb-Vax
Subject: Lisps on 68000's
Article-I.D.: allegra.1760

A while ago I posted a query about Lisps on 68000's.  I got
essentially zero replies, so let me post what I know and see whether
anyone can add to it.

First, Franz Lisp is being ported from the VAX to 68000's.  However,
the ratio of random rumors to solid facts concerning this undertaking
seems the greatest since the imminent availability of NIL.  Moreover,
I don't really like Franz; it has too many seams showing (I've had too
many programs die without warning from segmentation errors and the
like).

Then there's T.  T sounds good, but the people who are saying it's
great are the same ones trying to sell it to me for several thousand
dollars, so I'd like to get some more disinterested opinions first.
The only person I've talked to said it was awful, but he admits he
used an early version.

I have no special knowledge of PSL, particularly of the user
environment or of how useful or standard its dialect looks, nor of the
status of its 68000 version.

As for an eventual Common Lisp on a 68000, well, who knows?

There are also numerous toy systems floating around, but none I would 
consider for serious work.

Well, that's about everything I know; can anyone correct me or add to
the list?

Cheers,
John ("Don't Wanna Program in C") DeTreville
Bell Labs, Murray Hill

[I will reprint some of the recent Info-Graphics discussion of SUNs
and other workstations as LISP-based graphics servers.  Several of
the comments relate to John's query.  -- KIL]

------------------------------

Date: Fri, 5 Aug 83 21:30:22 PDT
From: fateman%ucbkim@Berkeley (Richard Fateman)
Subject: SUNs, 3600s, and Lisp

         [Reprinted from the Info-Graphics discussion list.]

[...]

In answer to Fred's original query (I replied to him personally
earlier), Franz has been running on a SUN since January 1983.  We
find it runs Lisp faster than a VAX 750, and with expected performance
improvements it may be close to a VAX 780 (about 2.5 to 4 times
slower than a KL-10).  This makes running Franz on a VAX almost
irrelevant.  More specifically, in answer to FRD's question, Franz on
the SUN has full access to the graphics software on it, and one could
set up inter-process communication between a Franz on a VAX and
something else (e.g. Franz) on a SUN. A system for shipping smalltalk
pictures to SUNs runs at UCB.

  Franz runs on other 68000 UNIX workstations, including Pixel, Dual,
and Apple Lisa.  Both Interlisp-D and MIT LispMachine Lisp have more 
highly developed graphics stuff at the moment.

  As far as other lisps, I would expect PSL and T, which run on Apollo
Domain 68000 systems, to be portable towards the SUN, and I would not
be surprised if other systems turn up.  For the moment though, Franz
seems to be alone.  Most programs run on the SUN without change (e.g.
Macsyma).

------------------------------

Date: Sat 6 Aug 83 13:39:13-PDT
From: Bill Nowicki <NOWICKI@SU-SCORE.ARPA>
Subject: Re: LISP & SUNs ...

         [Reprinted from the Info-Graphics discussion list.]

You can certainly run Franz under Unix from SMI, but it is SLOW.  Most
Lisps are still memory hogs, so as was pointed out, you need a
$100,000 Lisp machine to get decent response.

If $100,000 is too much for you to spend on each programmer, you might
want to look at what we are doing on the fourth floor here at
Stanford.  We are running a small real-time kernel in a cheap, quiet,
diskless SUN, which talks over the network to various servers.  Bill
Yeager of Sumex has written a package which runs under Interlisp and
talks to our Virtual Graphics Terminal Service.  Interlisp can be run
on VAX/Unix or VAX/VMS systems, TOPS-20, or Xerox D machines.  The
cost/performance ratio is very good, since each workstation only needs
256K of memory, frame buffer, CPU, and Ethernet interface, while the 
DECSystem-20 or VAX has 8M bytes and incredibly fast system 
performance (albeit shared between 20 users).

We are also considering both PSL and T since they already have 68000
compilers.  I don't know how this discussion got on Info-Graphics.

        -- Bill

------------------------------

Date: 6 Aug 1983 1936-MDT
From: JW-Peterson@UTAH-20 (John W. Peterson)
Subject: Lisp Machines

         [Reprinted from the Info-Graphics discussion list.]

Folks who don't have >$60K to spend on a Lisp Machine may want to
consider Utah's Portable Standard Lisp (PSL) running on the Apollo 
workstation.  Apollo PSL has been distributed for several months now.
PSL is a full Lisp implementation, complete with a 68000 Lisp
compiler.  The standard distribution also comes with a wide range of
utilities.

PSL has been in use at Utah for almost a year now and is supporting
applications in computer algebra (the Reduce system from Rand), VLSI
design, and computer-aided geometric design.

In addition, the Apollo implementation of PSL comes with a large and
easily extensible system interface package.  This provides easy,
interactive access to the resident Apollo window package, graphics
library, process communication system and other operating system
services.

If you have any questions about the system, feel free to contact me
via
        JW-PETERSON@UTAH-20 (arpa) or
        ...!harpo!utah-cs!jwp (uucp)

jw

------------------------------

Date: Sun, 7 Aug 83 12:08:08 CDT
From: Mike.Caplinger <mike.rice@Rand-Relay>
Subject: SUNs

         [Reprinted from the Info-Graphics discussion list.]

[...]

Lisp is available from UCB (ftp from ucb-vax) for the SUN and many 
similar 68K-based machines.  We have it up on our SMI SUNs running
4.1c UNIX.  It seems about as good as Franz on the VAX, which, from a
graphics standpoint, is saying nothing at all.

By the way, the SUN graphics library, SUNCore, seems to be an OK
implementation of the SIGGRAPH Core standard.  It has some omissions and
extensions, like every implementation.  I haven't used it extensively 
yet, and it has some problems, but it should get some good graphics 
programs going fairly rapidly.  I haven't yet seen a good graphics
demo for the SUN.  I hope this isn't indicative of what you can
actually do with one.

By the way, "Sun Workstation" is a registered trademark of Sun 
Microsystems, Inc.  You may be able to get a "SUN-like" system 
elsewhere.  I'm not an employee of Sun, I just have to deal with them
a lot...

------------------------------

End of AIList Digest
********************

∂23-Aug-83  1228	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #46
Received: from SRI-AI by SU-AI with TCP/SMTP; 23 Aug 83  12:27:41 PDT
Date: Tuesday, August 23, 1983 10:53AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #46
To: AIList@SRI-AI


AIList Digest            Tuesday, 23 Aug 1983      Volume 1 : Issue 46

Today's Topics:
  Artificial Intelligence - Prejudice & Frames & Turing Test & Evolution,
  Fifth Generation - Top-Down Research Approach
----------------------------------------------------------------------

Date: Thu 18 Aug 83 14:49:13-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Prejudice

The message from (I think .. apologies if wrong) Stan the Leprechaun,
which sets up "rational thought" as the opposite of "right-wingism"
and of "irascibility", disproves the contention in another message
that "bigotry and intelligence are mutually exclusive".  Indeed this
latter message is its own disproof, at least by my definition of
bigotry.  All of which leads me to believe that one or the other of
them *was* sent by an AI-project Flamer-type program.  Good work.
                                                - Richard

------------------------------

Date: 22 Aug 83 19:45:38-EDT (Mon)
From: The soapbox of Gene Spafford <spaf%gatech@UDel-Relay>
Subject: AI and Human Intelligence

[The following are excerpts from several interchanges with the author.
-- KIL]

Words mean not necessarily what I want them to mean nor what you want
them to mean, but what we all agree that they mean.  My point is that 
we may well need to consider emotions and ethics in any model we
care to construct of a "human" intelligence.  The ability to handle a
conversation, as is implied by the Turing test, is not sufficient in 
my eyes to classify something as "intelligent."  That is, what
*exactly* is intelligence?  Is it something measured by an IQ test?
I'm sure you realize that that particular point is a subject of much
conjecture.

If these discussion groups are for discussion of artificial
"intelligence," then I would like to see some thought given as to the
definition of "intelligence."  Is emotion part of intelligence?  Is
superstition part of intelligence?

FYI, I do not believe what I suggested -- that bigots are less than
human.  I made that suggestion to start some comments.  I have gotten
some interesting mail from people who have thought some about the
idea, and from a great many people who decided I should be locked away
for even coming up with the idea.

[...]

That brought to mind a second point -- what is human?  What is
intelligence?  Are they the same thing? (My belief -- no, they aren't.)
I proposed that we might classify "human" as being someone who *at
least tries* to overcome irrational prejudices and bigotry.  More than
ever we need such qualities as open-mindedness and compassion, as
individuals and as a society.  Can those qualities be programmed into
an AI system?  [...]

My original submission to Usenet was intended to be a somewhat 
sarcastic remark about the nonsense that was going on in a few of the
newsgroups.  Responses to me via mail indicate that at least a few
people saw through to some deeper, more interesting questions.  For
those people who immediately jumped on my case for making the
suggestion, not only did you miss the point -- you *are* the point.

--
  The soapbox of Gene Spafford
  CSNet:  Spaf @ GATech ARPA:  Spaf.GATech @ UDel-Relay
  uucp: ...!{sb1,allegra,ut-ngp}!gatech!spaf
        ...!duke!mcnc!msdc!gatech!spaf

------------------------------

Date: 18 Aug 83 13:40:03-PDT (Thu)
From: decvax!linus!vaxine!wjh12!brh @ Ucb-Vax
Subject: Re: AI Projects on the Net
Article-I.D.: wjh12.299

        I realize this article was a while ago, but I'm just catching
up with my news reading, after vacation.  Bear with me.

        I wonder why folks think it would be so easy for an AI program
to "change its thought processes" in ways we humans can't.  I submit
that (whether it's an expert system, an experiment in KR, or whatever)
the suggestion to 'not think about zebras' might have a similar
effect on an AI project as on a human.  After all, it IS going to have
to decipher exactly what you meant by the suggestion.  On the other
hand, might it not be easier for one of you humans .... we, I mean ...
to consciously think of something else, and 'put it out of your
mind'??

        Still an open question in my mind...  (Now, let's hope this
point isn't already in an article I haven't read...)

                        Brian Holt
                        wjh!brh

------------------------------

Date: Friday, 19 Aug 1983 09:39-PDT
From: turner@rand-unix
Subject: Prejudice and Frames, Turing Test


  I don't think prejudice is a by-product of Minsky-like frames.
Prejudice is simply one way to be misinformed about the world.  In
people, we also connect prejudice with the inability to correct
incorrect information in light of experiences which prove it to be
wrong.

  Nothing in Minsky frames, as opposed to any other theory, is a
necessary condition for this.  In any understanding situation, the
thinker must call on background information, regardless of how that is
best represented.  If this background information is incorrect and not
corrected in light of new information, then we may have prejudice.

  Of course, this is a subtle line.  A scientist doesn't change his
theories just because a fact wanders by that seems to contradict
them.  If he is wise, he waits until a body of irrefutable evidence
builds up.  Is he prejudiced toward his current theories?  Yes, I'd
say so, but in this case it is a useful prejudice.

  So prejudice is really a matter of the algorithm for modifying known
information in light of new information.  An algorithm that resists
change too strongly results in prejudice.  The opposite extreme -- an
algorithm that changes too easily -- results in faddism, blowing
whichever way the wind blows, and so on.

                        -----------

  Stan's point in I:42 about Zeno's paradox is interesting.  Perhaps
the mind cast forced upon the AI community by Alan Turing is wrong.
Is Turing's Test a valid test for Artificial Intelligence?

  Clearly not.  It is a test of Human Mimicry Ability.  It rests on
the assumption that the ability to mimic a human requires intelligence.
This has been shown in the past not to be entirely true; ELIZA is an
example of a program that clearly has no intelligence and yet mimics a
human in a limited domain fairly well.

  A common theme in science fiction is "Alien Intelligence".  That is,
the sf writer bases his story on the idea:  "What if alien
intelligence wasn't like human intelligence?"  Many interesting
stories have resulted from this basis.  We face a similar situation
here.  We assume that Artificial Intelligence will be detectable by
its resemblance to human intelligence.  We really have little ground
for this belief.

  What we need is a better definition of intelligence, and a test
based on this definition.  In the Turing mind set, the definition of
intelligence is "acts like a human being" and that is clearly
insufficient.  The Turing test also leads one to think erroneously
that intelligence is a property with two states (intelligent and
non-intelligent) when even amongst humans there is a wide variance in
the level of intelligence.

  My initial feeling is to relate intelligence to the ability to
achieve goals in a given environment.  The more intelligent man today
is the one who gets what he wants; in short, the more you achieve your
goals, the more intelligent you are.  This means that a person may be
more intelligent in one area of life than in another.  He is, for
instance, a great businessman but a poor father.  This is no surprise.
We all recognize that people have different levels of competence in
different areas.

  Of course, this definition has problems.  If your goal is to lift
great weights, then your intelligence may be dependent on your
physical build.  That doesn't seem right.  Is a chess program more
intelligent when it runs on a faster machine?

  In the sense of this definition we already have many "intelligent"
programs in limited domains.  For instance, in the domain of
electronic mail handling, there are many very intelligent entities.
In the domain of human life, no electronic entities.  In the domain of
human politics, no human entities (*ha*ha*).

  I'm sure it is nothing new to say that we should not worry about the
Turing test and instead worry about more practical and functional
problems in the field of AI.  It does seem, however, that the Turing
Test is a limited and perhaps blinding outlook onto the AI field.


                                        Scott Turner
                                        turner@randvax

------------------------------

Date: 21 Aug 83 13:01:46-PDT (Sun)
From: harpo!eagle!mhuxt!mhuxi!mhuxa!ulysses!smb @ Ucb-Vax
Subject: Hofstadter
Article-I.D.: ulysses.560

Douglas Hofstadter is the subject of today's N.Y. Times Magazine cover
story.  The article is worth reading, though not, of course,
particularly deep technically.  Among the points made:  that
Hofstadter is not held in high regard by many AI workers, because they
regard him as a popularizer without any results to back up his
theories.

------------------------------

Date: Tue, 23 Aug 83 10:35 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: Program Genesis

After reading in the New York Times Sunday Magazine of August 21 about
Douglas Hofstadter's latest idea on artificial intelligence arising
from the interplay of lower levels, I was inspired to carry his
suggestion to the logical limit.  I wrote the following item partly in
jest, but the idea may have some merit, at least to stimulate
discussion.  It was also inspired by Stanislaw Lem's story "Non
Serviam".

------------------------------------------------------------------------


                            PROGRAM GENESIS

                A COMPUTER MODEL OF THE PRIMORDIAL SOUP


     The purpose of this program is to model the primordial soup that 
existed in the earth's oceans during the period when life first
formed.  The program sets up a workspace (the ocean) in which storage
space in memory and CPU time (resources) are available to
self-replicating modules of memory organization (organisms).
Organisms are sections of code and data which, when run, cause copies
of themselves to be written into other regions of the workspace and
then run.  Overproduction of species, competition for scarce
resources, and occasional copying errors, either accidental or
deliberately introduced, create all the conditions necessary for the
onset of evolutionary processes.  A diagnostic package provides an
ongoing picture of the evolving state of the system.  The goal of the
project is to monitor the evolutionary process and see what this might
teach us about the nature of evolution.  A possible long-range 
application is a novel method for producing artificial intelligence.
The novelty is, of course, not complete, since it has been done at
least once before.
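
[One generation of such a soup might be sketched as follows; this is
illustrative, not part of the proposal, and MAYBE-MUTATE is a
hypothetical helper standing in for the copying errors above.]

```lisp
(DEFUN SOUP-STEP (OCEAN)                ; OCEAN is an array of organisms
  (DOTIMES (I (LENGTH OCEAN))
    (LET ((ORGANISM (AREF OCEAN I)))
      (WHEN ORGANISM
        ;; Copy into a random cell, overwriting whatever lived there --
        ;; the source of the competition for space.
        (SETF (AREF OCEAN (RANDOM (LENGTH OCEAN)))
              (MAYBE-MUTATE (COPY-TREE ORGANISM)))))))
```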

------------------------------

Date: 18 Aug 83 11:16:24-PDT (Thu)
From: decvax!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Japanese 5th Generation Effort
Article-I.D.: dciem.293

There seems to be an analogy between the 5th generation project and 
the ARPA-SUR project on automatic speech understanding of a decade
ago.  Both are top-down, initiated with a great deal of hope, and
dependent on solving some "nitty-gritty problems" at the bottom. The
result of the ARPA-SUR project was at first to slow down research in
ASR (automatic speech recognition) because a lot of people got scared
off by finding how hard the problem really is. But it did, as Robert
Amsler suggests the 5th generation project will, show just what
"nitty-gritty problems" are important. It provided a great step
forward in speech recognition, not only for those who continued to
work on projects initiated by ARPA-SUR, but also for those who have
come afterward. I doubt we would now be where we are in ASR if it had
not been for that apparently failed project ten years ago.
(Parenthetically, notice that a lot of the subsequent advances in ASR
have been due to the Japanese, and that European/American researchers
freely use those advances.)

Martin Taylor

------------------------------

End of AIList Digest
********************

∂24-Aug-83  1206	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #47
Received: from SRI-AI by SU-AI with TCP/SMTP; 24 Aug 83  12:05:26 PDT
Date: Wednesday, August 24, 1983 10:34AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #47
To: AIList@SRI-AI


AIList Digest           Wednesday, 24 Aug 1983     Volume 1 : Issue 47

Today's Topics:
  Request - AAAI-83 Registration,
  Logic Programming - PARLOG & PROLOG & LISP Prologs
----------------------------------------------------------------------

Date: 22 Aug 83 16:50:55-PDT (Mon)
From: harpo!eagle!allegra!jdd @ Ucb-Vax
Subject: AAAI-83 Registration
Article-I.D.: allegra.1777

Help!  I put off registering for AAAI-83 until too late, and now I
hear that it's overbooked!  (I heard 7000 would-be registrants and
1500 places, or some such.)  If you're registered but find you can't
attend, please let me know, or if you have any other suggestions, feel
free.

Cheers, John ("Something Wrong With My Planning Heuristics")
DeTreville Bell Labs, Murray Hill

------------------------------

Date: 23 Aug 83  1337 PDT
From: Diana Hall <DFH@SU-AI>
Subject: PARLOG

                 [Reprinted from the SCORE BBoard.]

Parlog Seminar

Keith Clark will give a seminar on Parlog Thursday, Sept. 1 at 3 p.m.
in Room 252 MJH.



              PARLOG: A PARALLEL LOGIC PROGRAMMING LANGUAGE

                              Keith L. Clark

ABSTRACT

        PARLOG is a logic programming language in the sense that
nearly every definition and query can be read as a sentence of
predicate logic.  It differs from PROLOG in incorporating parallel
modes of evaluation.  For reasons of efficient implementation, it
distinguishes and separates and-parallel and or-parallel evaluation.
        PARLOG relations are divided into two types:  and-relations
and or-relations.  A sequence of and-relation calls can be evaluated
in parallel with shared variables acting as communication channels.
Only one solution to each call is computed.
        A sequence of or-relation calls is evaluated sequentially but
all the solutions are found by a parallel exploration of the different
evaluation paths.  A set constructor provides the main interface
between and-relations and or-relations.  This wraps up all the
solutions to a sequence of or-relation calls in a list.  The solution
list can be concurrently consumed by an and-relation call.
        The and-parallel definitions of relations that will only be
used in a single functional mode can be given using conditional
equations.  This gives PARLOG the syntactic convenience of functional
expressions when non-determinism is not required.  Functions can be
invoked eagerly or lazily; the eager evaluation of nested function
calls corresponds to and-parallel evaluation of conjoined relation
calls.
        This paper is a tutorial introduction and semi-formal
definition of PARLOG.  It assumes familiarity with the general
concepts of logic programming.

------------------------------

Date: Thu 18 Aug 83 20:00:36-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: There are Prologs and Prologs ...

In the July issue of SIGART an article by Richard Wallace describes 
PiL, yet another Prolog in Lisp. The author claims that his 
interpreter shows that "it is easy to extend Lisp to do what Prolog 
does."

It is a useful pedagogical exercise for Lisp users interested in logic
programming to look at a simple, clean implementation of a subset of 
Prolog in Lisp. A particularly illuminating implementation and 
discussion is given in "Structure and Interpretation of Computer
Programs", a set of MIT lecture notes by Abelson and Sussman.

However, such simple interpreters (even Abelson and Sussman's, which
is far better than PiL) are not a sufficient basis for the claim
that "it is easy to extend Lisp to do what Prolog does."  What Prolog
"does" is not just to make certain deductions in a certain order, but 
also MAKE THEM VERY FAST. Unfortunately, ALL Prologs in Lisp I know of
fail in this crucial aspect (by factors between 30 and 1000).

Why is speed such a crucial aspect of Prolog (or of Lisp, for that 
matter)? First, because the development of complex experimental 
programs requires MANY, MANY experiments, which just could not be done
if the systems were, say, 100 times slower than they are. Second, 
because a Prolog (Lisp) system needs to be written mostly in Prolog 
(Lisp) to support the extensibility that is a central aspect of modern
interactive computing environments.

The following paraphrase of Wallace's claim shows its absurdity: "[LiA
(Lisp in APL) shows] that it is easy to extend APL to do what Lisp
does."  Really?  All of what Maclisp does?  All of what ZetaLisp does?

Lisp and Prolog are different if related languages. Both have their 
supporters. Both have strengths and (serious) weaknesses. Both can be 
implemented with comparable efficiency.  It is educational to look
both at (sub)Prologs in Lisp and (sub)Lisps in Prolog.  Let's not claim
discoveries of philosopher's stones.

Fernando Pereira
AI Center
SRI International

------------------------------

Date: Wed, 17 Aug 1983  10:20 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: FOOLOG Prolog

                 [Reprinted from the PROLOG Digest.]

Here is a small Prolog ( FOOLOG = First Order Oriented LOGic )
written in Maclisp. It includes the evaluable predicates CALL,
CUT, and BAGOF. I will probably permanently damage my reputation
as a MacLisp programmer by showing it, but, to hedge a little, I can
say that I wanted to see how small one could
make a Prolog while maintaining efficiency ( approx 2 pages; 75%
of the speed of the Dec-10 Prolog interpreter ).  It is actually
possible to squeeze Prolog into 16 lines.  If you are interested
in that one and in FOOLOG, I have a ( very ) brief report describing
them that I can send you.  Also, I'm glad to answer any questions
about FOOLOG. I prefer that you send messages by Snail Mail,
since I do not have a net connection.  If that is inconvenient, you
can also send messages via Ken Kahn, who forwards them.

My address is:

Martin Nilsson
UPMAIL
Computing Science Department
Box 2059
S-750 02 UPPSALA, Sweden


---------- Here is a FOOLOG sample run:

(load 'foolog)          ; Lower case is user type-in

; Loading DEFMAX 9844442.
(progn (defpred member  ; Definition of MEMBER predicate
         ((member ?x (?x . ?l)))
         ((member ?x (?y . ?l)) (member ?x ?l)))
       (defpred cannot-prove    ; and CANNOT-PROVE predicate
         ((cannot-prove ?goal) (call ?goal) (cut) (nil))
         ((cannot-prove ?goal)))
       'ok)
OK
(prove (member ?elem (1 2 3)) ; Find elements of the list
       (writeln (?elem is an element)))
(1. IS AN ELEMENT)
MORE? t                 ; Find the next solution
(2. IS AN ELEMENT)
MORE? nil               ; This is enough
(TOP)
(prove (cannot-prove (= 1 2))) ; The two cannot-prove cases
MORE? t
NIL
(prove (cannot-prove (= 1 1)))
NIL


---------- And here is the source code:

; FOOLOG Interpreter (c) Martin Nilsson  UPMAIL   1983-06-12

(declare (special *inf* *e* *v* *topfun* *n* *fh* *forward*)
         (special *bagof-env* *bagof-list*))

(defmacro defknas (fun args &rest body)
  `(defun ,fun macro (l)
     (cons 'progn (sublis (mapcar 'cons ',args (cdr l))
                          ',body))))

; ---------- Interpreter

(setq *e* nil *fh* nil *n* nil *inf* 0
      *forward* (munkam (logior 16. (logand (maknum 0) -16.))))
(defknas imm (m x) (cxr x m))
(defknas setimm (m x v) (rplacx x m v))
(defknas makrecord (n)
  (loop with r = (makhunk n) and c for i from 1 to (- n 2) do
        (setq c (cons nil nil))
        (setimm r i (rplacd c c)) finally (return r)))

(defknas transfer (x y)
  (setq x (prog1 (imm x 0) (setq y (setimm x 0 y)))))
(defknas allocate nil
  (cond (*fh* (transfer *fh* *n*) (setimm *n* 7 nil))
        ((setq *n* (setimm (makrecord 8) 0 *n*)))))
(defknas deallocate (on)
  (loop until (eq *n* on) do (transfer *n* *fh*)))
(defknas reset (e n) (unbind e) (deallocate n) nil)
(defknas ult (m x)
  (cond ((or (atom x) (null (eq (car x) '/?))) x)
        ((< (cadr x) 7)
         (desetq (m . x) (final (imm m (cadr x)))) x)
        ((loop initially (setq x (cadr x)) until (< x 7) do
               (setq x (- x 6)
                     m (or (imm m 7)
                           (imm (setimm m 7 (allocate)) 7)))
          finally (desetq (m . x) (final (imm m x)))
          (return x)))))
(defknas unbind (oe)
  (loop with x until (eq *e* oe) do
   (setq x (car *e*)) (rplaca x nil) (rplacd x x) (pop *e*)))
(defknas bind (x y n)
  (cond (n (push x *e*) (rplacd x (cons n y)))
        (t (push x *e*) (rplacd x y) (rplaca x *forward*))))
(lap-a-list '((lap final subr) (hrrzi 1 @ 0 (1)) (popj p) nil))
; (defknas final (x) (cdr (memq nil x))) ; equivalent
(defknas catch-cut (v e)
  (and (null (and (eq (car v) 'cut) (eq (cdr v) e))) v))

(defun prove fexpr (gs)
  (reset nil nil)
  (seek (list (allocate)) (list (car (convq gs nil)))))

(defun seek (e c)
  (loop while (and c (null (car c))) do (pop e) (pop c))
  (cond ((null c) (funcall *topfun*))
        ((atom (car c)) (funcall (car c) e (cdr c)))
        ((loop with rest = (cons (cdar c) (cdr c)) and
          oe = *e* and on = *n* and e1 = (allocate)
          for a in (symeval (caaar c)) do
          (and (unify e1 (cdar a) (car e) (cdaar c))
               (setq *inf* (1+ *inf*)
                     *v* (seek (cons e1 e)
                               (cons (cdr a) rest)))
               (return (catch-cut *v* e1)))
          (unbind oe)
          finally (deallocate on)))))

(defun unify (m x n y)
  (loop do
    (cond ((and (eq (ult m x) (ult n y)) (eq m n)) (return t))
          ((null m) (return (bind x y n)))
          ((null n) (return (bind y x m)))
          ((or (atom x) (atom y)) (return (equal x y)))
          ((null (unify m (pop x) n (pop y))) (return nil)))))

; ---------- Evaluable Predicates

(defun inst (m x)
  (cond ((let ((y x))
           (or (atom (ult m x)) (and (null m) (setq x y)))) x)
        ((cons (inst m (car x)) (inst m (cdr x))))))

(defun lisp (e c)
  (let ((n (pop e)) (oe *e*) (on *n*))
    (or (and (unify n '(? 2) (allocate) (eval (inst n '(? 1))))
             (seek e c))
        (reset oe on))))

(defun cut (e c)
  (let ((on (cadr e))) (or (seek (cdr e) c) (cons 'cut on))))

(defun call (e c)
  (let ((m (car e)) (x '(? 1)))
    (seek e (cons (list (cons (ult m x) '(? 2))) c))))

(defun bagof-topfun nil
  (push (inst *bagof-env* '(? 1)) *bagof-list*) nil)

(defun bagof (e c)
  (let* ((oe *e*) (on *n*) (*bagof-list* nil)
                  (*bagof-env* (car e)))
    (let ((*topfun* 'bagof-topfun)) (seek e '(((call (? 2))))))
    (or (and (unify (pop e) '(? 3) (allocate) *bagof-list*)
             (seek e c))
        (reset oe on))))

; ---------- Utilities

(defun timer fexpr (x)
  (let* ((*rset nil) (*inf* 0) (x (list (car (convq x nil))))
         (t1 (prog2 (gc) (runtime) (reset nil nil)
                    (seek (list (allocate)) x)))
         (t1 (- (runtime) t1)))
    (list (// (* *inf* 1000000.) t1) 'LIPS (// t1 1000.)
          'MS *inf* 'INF)))

(eval-when (compile eval load)
  (defun convq (t0 l0)
    (cond ((pairp t0) (let* (((t1 . l1) (convq (car t0) l0))
                             ((t2 . l2) (convq (cdr t0) l1)))
                        (cons (cons t1 t2) l2)))
          ((null (and (symbolp t0) (eq (getchar t0 1) '/?)))
           (cons t0 l0))
          ((memq t0 l0)
           (cons (cons '/? (cons (length (memq t0 l0))
                                 t0)) l0))
          ((convq t0 (cons t0 l0))))))

(defmacro defpred (pred &rest body)
  `(setq ,pred ',(loop for clause in body
                       collect (car (convq clause nil)))))

(defpred true    ((true)))
(defpred =       ((= ?x ?x)))
(defpred lisp    ((lisp ?x ?y) . lisp))
(defpred cut     ((cut) . cut))
(defpred call    ((call (?x . ?y)) . call))
(defpred bagof   ((bagof ?x ?y ?z) . bagof))
(defpred writeln
  ((writeln ?x) (lisp (progn (princ '?x) (terpri)) ?y)))

(setq *topfun*
      '(lambda nil (princ "MORE? ")
               (and (null (read)) '(top))))

------------------------------

Date: Wed, 17 Aug 1983  10:14 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: A Pure Prolog Written In Pure Lisp

                 [Reprinted from the PROLOG Digest.]

;; The following is a tiny Prolog interpreter in MacLisp
;; written by Ken Kahn.
;; It was inspired by other tiny Lisp-based Prologs of
;; Par Emanuelson and Martin Nilsson
;; There are no side-effects anywhere in the implementation,
;; though it is of course very slow.

(defun Prolog (database) ;; a top-level loop for Prolog
  (prove (list (rename-variables (read) '(0)))
         ;; read a goal to prove
         '((bottom-of-environment)) database 1)
  (prolog database))

(defun prove (list-of-goals environment database level)
  ;; proves the conjunction of the list-of-goals
  ;; in the current environment
  (cond ((null list-of-goals)
         ;; succeeded since there are no goals
         (print-bindings environment environment)
          ;; the user answers "y" or "n" to "More?"
         (not (y-or-n-p "More?")))
        (t (try-each database database
                     (rest list-of-goals) (first list-of-goals)
                     environment level))))

(defun try-each (database-left database goals-left goal
                               environment level)
 (cond ((null database-left)
        ()) ;; fail since nothing left in database
       (t (let ((assertion
                 ;; level is used to uniquely rename variables
                 (rename-variables (first database-left)
                                   (list level))))
            (let ((new-environment
                   (unify goal (first assertion) environment)))
              (cond ((null new-environment) ;; failed to unify
                     (try-each (rest database-left)
                               database
                               goals-left
                               goal
                               environment level))
                    ((prove (append (rest assertion) goals-left)
                            new-environment
                            database
                            (add1 level)))
                    (t (try-each (rest database-left)
                                 database
                                 goals-left
                                 goal
                                 environment
                                 level))))))))

(defun unify (x y environment)
  (let ((x (value x environment))
        (y (value y environment)))
    (cond ((variable-p x) (cons (list x y) environment))
          ((variable-p y) (cons (list y x) environment))
          ((or (atom x) (atom y))
           (and (equal x y) environment))
          (t (let ((new-environment
                    (unify (first x) (first y) environment)))
               (and new-environment
                    (unify (rest x) (rest y)
                           new-environment)))))))

(defun value (x environment)
  (cond ((variable-p x)
         (let ((binding (assoc x environment)))
           (cond ((null binding) x)
                 (t (value (second binding) environment)))))
        (t x)))

(defun variable-p (x) ;; a variable is a list beginning with "?"
  (and (listp x) (eq (first x) '?)))

(defun rename-variables (term list-of-level)
  (cond ((variable-p term) (append term list-of-level))
        ((atom term) term)
        (t (cons (rename-variables (first term)
                                   list-of-level)
                 (rename-variables (rest term)
                                   list-of-level)))))

(defun print-bindings (environment-left environment)
  (cond ((rest environment-left)
         (cond ((zerop
                 (third (first (first environment-left))))
                (print
                 (second (first (first environment-left))))
                (princ " = ")
                (prin1 (value (first (first environment-left))
                              environment))))
         (print-bindings (rest environment-left) environment))))

;; a sample database:
(setq db '(((father jack ken))
           ((father jack karen))
           ((grandparent (? grandparent) (? grandchild))
            (parent (? grandparent) (? parent))
            (parent (? parent) (? grandchild)))
           ((mother el ken))
           ((mother cele jack))
           ((parent (? parent) (? child))
            (mother (? parent) (? child)))
           ((parent (? parent) (? child))
            (father (? parent) (? child)))))

;; the following are utilities

(defun first (x) (car x))
(defun rest (x) (cdr x))
(defun second (x) (cadr x))
(defun third (x) (caddr x))
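
;; [The unifier above translates almost line for line into a modern
;; language.  The following Python sketch is an editorial translation,
;; not part of Kahn's original message; the representation (a variable
;; is a tuple beginning with "?", the environment is a list of
;; bindings) is an assumption chosen to mirror the Lisp.]

```python
# Rough Python translation (not Kahn's original) of the side-effect-free
# unifier above: a variable is a tuple beginning with "?", any other
# tuple is a compound term, and the environment is a binding list.

def variable_p(x):
    """A variable is a tuple whose first element is '?'."""
    return isinstance(x, tuple) and x[:1] == ("?",)

def value(x, env):
    """Dereference x through the environment, as in the Lisp `value`."""
    if variable_p(x):
        for var, val in env:
            if var == x:
                return value(val, env)
    return x

def unify(x, y, env):
    """Return an extended environment on success, None on failure."""
    x, y = value(x, env), value(y, env)
    if variable_p(x):
        return env if x == y else [(x, y)] + env
    if variable_p(y):
        return [(y, x)] + env
    if not (isinstance(x, tuple) and isinstance(y, tuple)):
        return env if x == y else None
    if len(x) != len(y):
        return None
    for xi, yi in zip(x, y):
        env = unify(xi, yi, env)
        if env is None:
            return None
    return env
```

For example, unifying `(father ?x)` with `(father jack)` in an empty
environment binds `?x` to `jack`, just as the Lisp version would.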

------------------------------

End of AIList Digest
********************

∂25-Aug-83  1057	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #48
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Aug 83  10:56:54 PDT
Date: Thursday, August 25, 1983 9:14AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #48
To: AIList@SRI-AI


AIList Digest           Thursday, 25 Aug 1983      Volume 1 : Issue 48

Today's Topics:
  AI Literature - Journals & COMTEX & Online Reports,
  AI Architecture - The Connection Machine,
  Programming Languages - Scheme and Lisp Availability,
  Artificial Intelligence - Turing Test & Hofstadter Article
----------------------------------------------------------------------

Date: 20 Aug 1983 0011-MDT
From: Jed Krohnfeldt <KROHNFELDT@UTAH-20>
Subject: Re: AI Journals

I would add one more journal to the list:

Cognition and Brain Theory
	Lawrence Erlbaum Associates, Inc.
	365 Broadway,
	Hillsdale, New Jersey 07642
	$18 Individual, $50 Institutional
	Quarterly
	Basic cognition, proposed models and discussion of
	consciousness and mental process, epistemology - from frames to
	neurons, as related to human cognitive processes. A "fringe"
	publication for AI topics, and a good forum for issues in cognitive
	science/psychology.

Also, I notice that the institutional rate was quoted for several of 
the journals cited.  Many of these journals can be had for less if you
convince them that you are a lone reader (individual) and/or a 
student.


[Noninstitutional members of AAAI can get the Artificial Intelligence
Journal for $50.  See the last page of the fall AI Magazine.

Another journal for which I have an ad is

New Generation Computing
	Springer-Verlag New York Inc.
	Journal Fulfillment Dept.
	44 Hartz Way
	Secaucus, NJ  07094
	A quarterly English-language journal devoted to international
	research on the fifth generation computer.  [It seems to be
	very strong on hardware and logic programming.]
	1983 - 2 issues - $52. (Sample copy free.)
	1984 - 4 issues - $104.

-- KIL]

------------------------------

Date: Sun 21 Aug 83 18:06:52-PDT
From: Robert Amsler <AMSLER@SRI-AI>
Subject: Journal listings

Computing Reviews, Nov. 1982, lists all the periodicals they receive 
and their addresses. Handy list of a lot of CS journals.

------------------------------

Date: Tue, 23 Aug 83 11:05 EDT
From: Tim Finin <Tim%UPenn@UDel-Relay>
Subject: COMTEX and getting AI technical reports


There WAS a company which offered a service in which subscribers would
get copies of recent technical reports on all areas of AI research -
COMTEX.  The reports were to be drawn from universities and
institutions doing AI research.  The initial offering in the series
contained old Stanford and MIT memos.  The series was intended to
provide very timely access to current research at the participating
institutions. COMTEX has decided to discontinue the AI series, however.
Perhaps if they perceive an increased demand for this series they will
reactivate it.

Tim

[There is a half-page Comtex ad for the MIT and Stanford memoranda in
the Fall issue of AI Magazine, p. 79.  -- KIL]

------------------------------

Date: 19 Aug 83 19:21:34 PDT (Friday)
From: Hamilton.ES@PARC-MAXC.ARPA
Subject: On-line tech reports?

I raised this issue on Human-nets nearly two years ago and didn't seem
to get more than a big yawn for a response.

Here's an example of what I had to go through recently:  I saw an 
interesting-looking CMU tech report (Newell, "Intellectual Issues in
the History of AI") listed in SIGART News.  It looked like I could
order it from CMU.  No ARPANET address was listed, so I wrote -- I
even gave them my ARPANET address.  They sent me back a form letter
via US Snail referring me to NTIS.  So then I phoned NTIS.  I talked
to an answering machine and left my US Snail address and the order
number of the tech report.  They sent me back a postcard giving the
price, something like $7.  I sent them back their order form,
including my credit card#.  A week or so later I got back a moderately
legible document, probably reproduced from microfiche, that looks
suspiciously like a Bravo document that's probably on line somewhere,
if I only knew where.  I'm not picking on CMU -- this is a general
problem.

There's GOT to be a better way.  How about: (1) Have a standard 
directory at each major ARPA host, containing at least a catalog with 
abstracts of all recent tech reports, and info on how to order, and 
hopefully full text of at least the most recent and/or popular ones, 
available for FTP, perhaps at off-peak hours only.  (2) Hook NTIS into
ARPANET, so that folks could browse their catalogs and submit orders 
electronically.

RUTGERS used to have an electronic mailing list to which they 
periodically sent updated tech report catalogs, but that's about the 
only activity of this sort that I've seen.

We've got this terrific electronic highway.  Let's make it useful for 
more than mailing around collections of flames, like this one!

--Bruce

------------------------------

Date: 23 August 1983 00:22 EDT
From: Alan Bawden <ALAN @ MIT-MC>
Subject: The Connection Machine

    Date: Thu 18 Aug 83 13:46:13-PDT
    From: David Rogers <DRogers at SUMEX-AIM.ARPA>

    The closest hardware I am aware of is called the Connection
    Machine, and is being developed at MIT by Alan Bawden, Dave
    Christman, and Danny Hillis ...

also Tom Knight, David Chapman, Brewster Kahle, Carl Feynman, Cliff
Lasser, and Jon Taft.  Danny Hillis provided the original ideas;
his is the name to remember.

    The project involves building a model with about 2↑10 processors.

The prototype Connection Machine was designed to have 2↑20 processors,
although 2↑10 might be a good size to actually build to test the idea.

One way to arrive at a superficial understanding of the Connection
Machine would be to imagine augmenting a NETL machine with the ability
to pass addresses (or "pointers") as well as simple markers.  This
permits the Connection Machine to perform even more complex pattern
matching on semantic-network-like databases.  The detection of any
kind of cycle (find all people who are employed by their own fathers),
is the canonical example of something this extension allows.
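
[As a toy illustration of that canonical query, here is a sequential
Python sketch over made-up data.  The names and relations are
invented, and an ordinary loop stands in for the machine's parallel
pointer propagation across all nodes at once.]

```python
# Hypothetical sketch of the "employed by their own father" query on a
# tiny semantic network.  The data are invented; a Connection Machine
# would examine every node simultaneously instead of looping.

father = {"ken": "jack", "karen": "jack", "jack": "sam"}    # child -> father
employer = {"ken": "jack", "karen": "acme", "jack": "ibm"}  # person -> employer

# Detect the cycle: a person whose employer pointer and father
# pointer name the same individual.
employed_by_own_father = sorted(
    p for p in employer if father.get(p) == employer[p]
)
```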

But that's only one way to program a Connection Machine.  In fact, the
thing seems to be a rather general parallel processor.

MIT AI Memo #646, "The Connection Machine" by Danny Hillis, is still a
perfectly good reference for the general principles behind the
Connection Machine, despite the fact that the hardware design has
changed a bit since it was written.  (The memo is currently being
revised.)

------------------------------

Date: 22 August 1983 18:20 EDT
From: Hal Abelson <HAL @ MIT-MC>
Subject: Lisps on 68000


At MIT we are working on a version of Scheme (a lexically scoped 
dialect of Lisp) that runs on the HP 9836 computer, which is a 68000 
machine.  Starting 3 weeks from now, 350 MIT students will be using 
this system on a full-time basis.

The implementation consists of a kernel written in 68000 assembler, 
with most of the system written in Scheme and compiled using a quick 
and dirty compiler, which is also written in Scheme.  The 
implementation sits inside of HP's UCSD-Pascal-clone operating system.
For an editor, we use NMODE, which is a version of EMACS written in 
Portable Standard Lisp. Thus our machines run, at present, with both 
Scheme and PSL resident, and consequently require 4 megabytes of main 
memory.  This will change when we get another editor, which will take
at least a few months.

The current system gives good performance for coursework, and is 
optimized to provide fast interpreted code, as well as a good 
debugging environment for student use.

Work will begin on a serious compiler as soon as the start-of-semester
panic is over.  There will also be a compatible version for the Vax.

Distribution policy has not yet been decided upon, but most likely we 
will give the system away (not the PSL part, which is not ours to 
give) to anyone who wants it, provided that people who get it agree to
return all improvements to MIT.

Please no requests for a few months, though, since we are still making
changes in the design and documentation.  Availability will be 
announced on this mailing list.

------------------------------

Date: 23 Aug 83 16:36:26-PDT (Tue)
From: harpo!seismo!rlgvax!cvl!umcp-cs!mark @ Ucb-Vax
Subject: Franz lisp on a Sun Workstation.
Article-I.D.: umcp-cs.2096

So what is the true story?  One person says it is almost as fast as
a single-user 780; another says it is an incredible hog.  These can't
both be right, as a Vax-780 IS at least as fast as a Lispmachine (not
counting the bitmapped screen).  It sounded to me like the person who
said it was fast had actually used it, but the person who said it was
slow was just working from general knowledge.  So maybe it is fast.
Wouldn't that be nice.
--
spoken: mark weiser
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!mark
CSNet:  mark@umcp-cs
ARPA:   mark.umcp-cs@UDel-Relay

------------------------------

Date: Tue 23 Aug 83 14:43:50-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: in defense of Turing

        Scott Turner (AIList V1 #46) has some interesting points about
intelligence, but I felt compelled to defend Turing in his absence.  
The Turing article in Mind (must reading for any AIer) makes it clear
that the test is not proposed to *define* an intelligent system, or
even to *recognize* one; the claim is merely that a system which *can*
pass the test has intelligence. Perhaps this is a subtle difference, 
but it's as important as the difference between "iff" and "if" in
math.

        Scott bemoans the Turing test as testing for "Human Mimicking 
Ability", and suggests that ELIZA has shown this to be possible 
without intelligence. ELIZA has fooled some people, though I would not
say it has passed anything remotely like the Turing test.  Mimicking
language is a far cry from mimicking intelligence.

        In any case, it may be even more difficult to detect 
intelligence without doing a comparison to human intellect; after all,
we're the only intelligent systems we know of...

Regards,

David

------------------------------

Date: Tue 23 Aug 83 19:23:00-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Hofstadter article

        Alas, after reading the article about Hofstadter in the
NYTimes, I realized that AI workers can be at least as close-minded
as other scientists have shown themselves to be. At bottom, DH's
basic feeling (that we have a long way to go before creating real
intelligence) is embarrassingly obvious. In the long run, the false
hopes that expectations of quick results give rise to can only hurt
the acceptance of AI in people's minds.

        (By the way, I thought the article was very well written, and
would encourage people to look it up. The report is spiced with
opinions from AI workers such as Allen Newell and Marvin Minsky, and it
was enjoyable to hear their candid comments about Hofstadter and AI in
general. Quite a step above the usual articles designed for general
consumption about AI...)

David R.

------------------------------

End of AIList Digest
********************

∂29-Aug-83  1311	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #49
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Aug 83  13:09:16 PDT
Date: Monday, August 29, 1983 11:08AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #49
To: AIList@SRI-AI


AIList Digest            Monday, 29 Aug 1983       Volume 1 : Issue 49

Today's Topics:
  Conferences - AAAI-83 Registration,
  Bindings - Rog-O-Matic & Mike Mauldin,
  Artificial Languages - Loglan,
  Knowledge Representation & Self-Consciousness - Textnet,
  AI Publication - Corporate Constraints,
  Lisp Availability - PSL on 68000's,
  Automatic Translation - Lisp-to-Lisp & Natural Language
----------------------------------------------------------------------

Date: 23 Aug 83 11:04:22-PDT (Tue)
From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!arnold@Ucb-Vax
Subject: Re: AAAI-83 Registration
Article-I.D.: umcp-cs.2093


        If there will be over 7000 people attending AAAI-83,
        then there will be almost as many people as will
        attend the World Sci. Fic. Convention.

        I worked registration for AAAI-83 on Aug 22 (Monday).
        There were about 700 spaces available, along with about
        1700 people who pre-registered.

        [...]

                --- A Volunteer

------------------------------

Date: 26 Aug 83 2348 EDT
From: Rudy.Nedved@CMU-CS-A
Subject: Rog-O-Matic & Mike Mauldin

Apparently people want something related to Rog-O-Matic and are 
sending requests to "Maudlin". If you look closely, that is not
how his name is spelled: people are transposing the "L" and the "D".
Hopefully this message will help the many people who are trying to
send Mike mail.

If you still can't get his mailing address right, try
"mlm@CMU-CS-CAD".

-Rudy
A CMU Postmaster

------------------------------

Date: 28 August 1983 06:36 EDT
From: Jerry E. Pournelle <POURNE @ MIT-MC>
Subject: Loglan

I've been interested in LOGLANS since Heinlein's GULF which was in
part devoted to it.  Alas, nothing seems to happen that I can use; is
the institute about to publish new materials?  Is there anything in
machine-readable form using Loglans?  Information appreciated.  JEP

------------------------------

Date: 25-Aug-83 10:03 PDT
From: Kirk Kelley  <KIRK.TYM@OFFICE-2>
Subject: Re: Textnet

A few issues back, Randy Trigg mentioned his "Textnet" thesis project,
which combines hypertext and NLS/Augment structures.  He makes a strong
statement about distributed Textnet on worldnet:

   There can be no mad dictator in such an information network.

I am interested in building a testing ground for statements such as
that.  It would contain a model that would simulate the global effects
of technologies such as publishing on-line.  Here is what may be of
interest to the AI community.  The simulation would be a form of
"augmented global self-consciousness" in that it models its own
viability as a service published on-line via worldnet.  If you have
heard of any similar project or might be interested in collaborating
on this one, let me know.

 -- kirk

------------------------------

Date: 25 Aug 83 15:47:19-PDT (Thu)
From: decvax!microsoft!uw-beaver!ssc-vax!tjj @ Ucb-Vax
Subject: Re: Language Translation
Article-I.D.: ssc-vax.475

OK, you turned your flame-thrower on, now prepare for mine!  You want
to know why things don't get published -- take a look at your address
and then at mine.  You live (I hope I'm not talking to an AI Project)
in the academic community; believe it or not there are those of us
who work in something euphemistically referred to as industry where
the rule is not publish or perish, the rule is keep quiet and you are
less likely to get your backside seared!  Come on out into the 'real'
world where technical papers must be reviewed by managers that don't
know how to spell AI, let alone understand what language translation
is all about.  Then watch as two of them get into a moebius argument,
one saying that there is nothing classified in the paper but there is
proprietary information, while the other says no proprietary but it
definitely is classified!  All the while this is going on the
deadline for submission to three conferences passes by like the
perennial river flowing to the sea.  I know reviews are not unheard
of in academia, and that professors do sometimes get into arguments,
but I've no doubt that they would be more generally favorable to
publication than managers who are worried about the next
stockholders' meeting.

It ain't all that bad, but at least you seem to need a wider
perspective.  Perhaps the results haven't been published; perhaps the
claims appear somewhat tentative; but the testing has been critical,
and the only thing left is primarily a matter of drudgery, not
innovative research.  I am convinced that we may certainly find a new
and challenging problem awaiting us once that has been done, but at
least we are not sitting around for years on end trying to paste
together a grammar for a context-sensitive language!!

Ted Jardine
TJ (with Amazing Grace) The Piper
ssc-vax!tjj

------------------------------

Date: 24 Aug 83 19:47:17-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: Lisps on 68000's - (nf)
Article-I.D.: uiucdcs.2626

I played with a version of PSL on a HP 9845 for several hours one
day.  The environment was just like running FranzLisp under Emacs in
"electric-lisp" mode. (However, the editor is written in PSL itself,
so it is potentially much more powerful than the emacs on our VAX,
with its screwy c/mock-lisp implementation.) The language is in the
style of Maclisp (rather than INTERLISP) and uses standard scoping
(rather than the lexical scoping of T). The machine has 512 by 512
graphics and a 2.5 dimensional window system, but neither are as
fully integrated into the programming environment as on a Xerox
Dolphin. Although I have no detailed benchmarks, I did port a
context-free chart parser to it. The interpreter speed was not
impressive, but was comparable with interpreted Franz on a VAX.
However, the speed of compiled code was very impressive. The compiler
is incremental and built into the lisp system (as in INTERLISP),
and caused about a 10-20 times speedup over interpreted code (my
estimate is that both the Franz and INTERLISP-d compilers only net
2-5 times speedup).  As a result, the compiled parser ran much faster
on the 68000 than the same compiled program on a Dolphin.

I think PSL is definitely a superior lisp for the 68000, but I have
no idea whether it will be available for non-HP machines...


Jordan Pollack
University of Illinois
...pur-ee!uiucdcs!uicsl!pollack

------------------------------

Date: 24 Aug 83 16:20:12-PDT (Wed)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: Lisp-to-Lisp translation
Article-I.D.: ssc-vax.468

These problems just go to show what AI people have known for years 
(ever since the first great bust of machine translation) - ya can't 
translate without understanding what yer translating.  Optimizing 
compilers are often impressive encodings of expert coders' knowledge, 
and they are for very simple languages - not like Interlisp or English.

                                        stan the lep hacker
                                        ssc-vax!sts (soon utah-cs)

------------------------------

Date: 24 Aug 83 16:12:59-PDT (Wed)
From: harpo!floyd!vax135!cornell!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Language Translation
Article-I.D.: ssc-vax.467

You have heard of my parser.  It's a variant on Berkeley's PHRAN, but 
has been improved to handle arbitrarily ambiguous sentences.  I
submitted a paper on it to AAAI-83, but it was rejected (well, I did
write it in about 3 days - wasn't very good).  A paper will be
appearing at the AIAA Computers in Aerospace conference in October.
The parser is only a *basic* solution - I suppose I should have made
that clearer.  Since it is knowledge-based, it needs **lots** of
knowledge.  Right now we're working on ways to acquire linguistic
knowledge automatically (Selfridge's work is very interesting).  The
knowledge base is woefully small, but we don't anticipate any problems
expanding it (famous last words!).

The parser has just been released for use within Boeing ("just"
meaning two days ago), and it may be a while before it becomes
available elsewhere (sorry).  I can mail details on it though.

As for language analysis being NP-complete, yes you're right.  But are
you sure that humans don't brute-force the process, and that computers
won't have to do the same?

                                        stan the lep hacker
                                        ssc-vax!sts (soon utah-cs)

ps if IBM is using APL, that explains a lot (I'm a former MVS victim)

------------------------------

Date: 24 Aug 83 15:47:11-PDT (Wed)
From: harpo!gummo!whuxlb!floyd!vax135!cornell!uw-beaver!ssc-vax!sts@Ucb-Vax
Subject: Re: So the language analysis problem has been solved?!?
Article-I.D.: ssc-vax.466

Heh-heh.  Thought that'd raise a few hackles (my boss didn't approve 
of the article; oh well.  I tend to be a bit fiery around the edges).

The claim is that we have "basically" solved the problem.  Actually, 
we're not the only ones - the APE-II parser by Pazzani and others from
the Schank school has also done the same thing.  Our parser can
handle arbitrarily ambiguous sentences, generating *all* the possible
meanings, limited only by the size of its knowledge base.  We have the
capability to do any sort of idiom, and mix any number of natural
languages.  Our problems are really concerned with the acquisition of
linguistic knowledge, either by having nonspecialists put it in by
hand (*everyone* is an expert on the native language) or by having the
machine acquire it automatically.  We can mail out some details if
anyone is interested.
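
For flavor, here is a toy sketch (Python, with invented word senses; this is
not the Boeing parser or APE-II) of what generating *all* the possible
meanings of an ambiguous sentence amounts to: each knowledge-base entry lists
candidate readings, and the analyzer enumerates every combination.

```python
from itertools import product

# Hypothetical sense inventory -- the senses and attachment choices
# below are invented purely for illustration.
SENSES = {
    "saw": ["perceived-visually", "cut-with-a-saw"],
    "with the telescope": ["instrument-of-seeing", "property-of-the-man"],
}

def all_readings(items):
    """Enumerate every combination of candidate senses, i.e. all the
    possible meanings; a real knowledge base would then prune the
    implausible ones."""
    choices = [SENSES.get(item, [item]) for item in items]
    return list(product(*choices))

readings = all_readings(["I", "saw", "the man", "with the telescope"])
# 2 verb senses x 2 attachments = 4 candidate readings
```

The size of the enumeration, not a grammar, is the limit here, which is at
least consistent with the claim that such a parser is bounded only by its
knowledge base.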

One advantage we had was starting from ground zero, so we had very few 
preconceptions about how language analysis ought to be done, and
scanned the literature.  It became apparent that since we were
required to handle free-form input, any kind of grammar would
eventually become less than useful and possibly a hindrance to
analysis.  Dr. Pereira admits as much when he says that grammars only
reflect *some* aspects of language.  Well, that's not good enough.  Us
folks in applied research can't always afford the luxury of theorizing
about the most elegant methods.  We need something that models human
cognition closely enough to make sense to knowledge engineers and to
users.  So I'm sort of in the Schank camp (folks at SRI hate 'em)
although I try to keep my thinking as independent as possible (hard
when each camp is calling the other ones charlatans; I'll post
something on that pernicious behavior eventually).

Parallel production systems I'll save for another article...

                                        stan the leprechaun hacker
                                        ssc-vax!sts (soon utah-cs)

ps I *did* read an article of Dr. Pereira's - couldn't understand the
point.  Sorry.  (perhaps he would be so good as to explain?)

[Which article? -- KIL]

------------------------------

Date: 26 Aug 83 11:19-EST (Fri)
From: Steven Gutfreund <gutfreund%umass-cs@UDel-Relay>
Subject: Musings on AI and intelligence

Spafford's musings on intelligent communications reminded me of an
article I read several years ago by John Thomas (then at T.J. Watson,
now at White Plains, a promotion as IBM sees it).

In the paper he distinguishes between two distinct approaches (or
philosophies) to raising the level of man/machine communication.

[Natural language recognition is one example of this problem. Here the
machine is trying to "decipher" the user's natural prose to determine
the desired action. Another example is intelligent interfaces that
attempt to decipher user "intentions".]

The Human Approach -

Humans view communication as inherently goal-based. When one
communicates with another human being, there is an explicit goal: to
induce a cognitive state in the OTHER. This cognitive state is usually
some function of the communicator's own cognitive state (usually the
identity function, since one wants the OTHER to understand what one is
thinking). In this approach the media of communication (words, art,
gesticulations) are not the items being communicated; they are
abstractions meant to key certain responses in the OTHER so as to
arrive at the desired goal.

The Mechanistic Approach

According to Thomas this is the approach taken by natural language
recognition people. Communication is the application of a decrypting
function to the prose the user employed. This approach is inherently
flawed, according to Thomas, since the actual words and prose do not
contain meaning in themselves but are tools for effecting cognitive
change.  Thus, the text of one of Goebbels's propaganda speeches
cannot be examined in itself to determine what it means; one needs an
awareness of the cognitive models, metaphors, and prejudices of the
speaker and listeners.  Capturing this sort of real-world knowledge
(biases, prejudices, intuitive feelings) is not a strong point of AI
systems. Yet the extent to which certain words move a person may
depend much more on, say, his Catholic upbringing than on the words
themselves.

If you doubt the above thesis, I encourage you to read Thomas Kuhn's
book "The Structure of Scientific Revolutions" and see how culture can
affect the interpretation of supposedly hard scientific facts and
observations.

Perhaps the thing that best brings this out is an essay (I believe it
was by Smullyan) in "The Mind's I" (Dennett and Hofstadter). In this
essay a homunculus is set up with the basic tools of one of Schank's
language understanding systems (scripts, text, rules, etc.). He then
goes about the translation of the text from one language to another,
applying a set of mechanistic transformation rules. Given that the
homunculus knows nothing of either the source language or the target
language, can you say that it has any understanding of what the script
was about? How does this differ from today's NLU systems?


                                        - Steven Gutfreund
                                          Gutfreund.umass@udel-relay

------------------------------

End of AIList Digest
********************

∂30-Aug-83  1143	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #50
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Aug 83  11:42:56 PDT
Date: Tuesday, August 30, 1983 10:16AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #50
To: AIList@SRI-AI


AIList Digest            Tuesday, 30 Aug 1983      Volume 1 : Issue 50

Today's Topics:
  AI Literature - Bibliography Request,
  Intelligence - Definition & Turing Test & Prejudice & Flamer
----------------------------------------------------------------------

Date: 29 Aug 1983 11:05:14-PDT
From: Susan L Alderson <mccarty@Nosc>
Reply-to: mccarty@Nosc
Subject: Help!


We are trying to locate any and all bibliographies, in electronic
form, of AI and Robotics.  I know that this covers a broad spectrum,
but we would rather have too many things to choose from than none at
all.  Any help or leads on this would be greatly appreciated.

We are particularly interested in:

    AI Techniques
    Vision Analysis
    AI Languages
    Robotics
    AI Applications
    Speech Analysis
    AI Environments
    AI Systems Support
    Cybernetics

This is not a complete list of our interests, but a good portion of
the high spots!

susie (mccarty@nosc-cc)


[Several partial bibliographies have been published in AIList; more
would be most welcome.  Readers able to provide pointers should reply
to AIList as well as to Susan.

Many dissertation and report abstracts have been published in the
SIGART newsletter; online copies may exist.  Individual universities
and corporations also maintain lists of their own publications; CMU,
MIT, Stanford, and SRI are among the major sources in this country.
(Try Navarro@SRI-AI for general AI and CPowers@SRI-AI for robotics
reports.)

One of the fastest ways to compile a bibliography is to copy authors'
references from the IJCAI and AAAI conference proceedings.  The AI
Journal and other AI publications are also good.  Beware of straying
too far from your main topics, however.  Rosenfeld's vision and image
processing bibliographies in CVGIP (Computer Vision, Graphics, and
Image Processing) list over 700 articles each year.

-- KIL]

------------------------------

Date: 25 Aug 1983 1448-PDT
From: Jay <JAY@USC-ECLC>
Subject: intelligence is...

  An intelligence must have at least three abilities: to act; to
perceive and classify (as one of: better, the same, worse) the
results of its actions, or the environment after the action; and
lastly to change its future actions in light of what it has perceived,
in an attempt to maximize "goodness" and avoid "badness".  My views are
very obviously flavored by behaviorism.

  In defense against objections I hear coming...  To act is necessary
for intelligence, since it is pointless to call a rock intelligent
when there seems to be no way to detect its intelligence.  To perceive
is necessary for intelligence, since otherwise projectiles, simple
chemicals, and other things that act by following a set of rules would
be classified as intelligent.  To change future actions is the most
important, since a toaster could perceive that it was overheating,
oxidizing its heating elements, and thus dying, but would be unable to
stop toasting until it suffered a breakdown.

  In summary, (NOT (AND actp perceivep evolvep)) -> (NOT intelligent);
that is, Action, Perception, and Evolution based upon perception are
necessary for intelligence.  I *believe* that these conditions are
also sufficient for intelligence.
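
Jay's three conditions can be restated as a toy predicate. This sketch (the
agent representation and the examples are mine, not part of Jay's posting)
just encodes "action, perception, and perception-driven change are each
necessary":

```python
def intelligent(agent):
    """Jay's proposed test: an agent must act, perceive, and change its
    future actions based on what it perceives (all three required)."""
    return bool(agent["acts"] and agent["perceives"] and agent["evolves"])

# The rock fails the action test; the toaster perceives its own
# overheating but cannot change its behavior, so it fails the
# evolution test; the learner passes all three.
rock    = {"acts": False, "perceives": False, "evolves": False}
toaster = {"acts": True,  "perceives": True,  "evolves": False}
learner = {"acts": True,  "perceives": True,  "evolves": True}
```

Whether passing all three is also *sufficient*, as Jay believes, is of course
exactly what the predicate cannot settle.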

awaiting flames,

j'

PS. Yes, the earth's bio-system IS intelligent.

------------------------------

Date: 25 Aug 83 2:00:58-PDT (Thu)
From: harpo!gummo!whuxlb!pyuxll!ech @ Ucb-Vax
Subject: Re: Prejudice and Frames, Turing Test
Article-I.D.: pyuxll.403

The characterization of prejudice as  an  unwillingness/inability
to  adapt  to  new  (contradictory)  data  is  an  appealing one.
Perhaps this belongs in net.philosophy, but it seems to me that a
requirement  for  becoming a fully functional intelligence (human
or otherwise) is to abandon the search for  compact,  comfortable
"truths"  and  view knowledge as an approximation and learning as
the process of improving those approximations.

There is nothing wrong with compact generalizations: they  reduce
"overhead" in routine situations to manageable levels. It is when
they   are   applied   exclusively   and/or    inflexibly    that
generalizations  yield bigotry and the more amusing conversations
with Eliza et al.

As for the Turing test, I think it may be appropriate to think of
it  as  a "razor" rather than as a serious proposal.  When Turing
proposed the test there was a philosophical argument raging  over
the  definition  of  intelligence,  much  of  which  was outright
mysticism. The famous test cuts the fog nicely: a device  needn't
have  consciousness,  a  soul,  emotions -- pick your own list of
nebulous terms -- in order to  function  "intelligently."  Forget
whether it's "the real thing," it's performance that counts.

I think Turing recognized that, no matter how successful AI  work
was, there would always be those (bigots?) who would rip the back
off the machine and say,  "You  see?  Just  mechanism,  no  soul,
no emotions..." To them, the Turing test replies, "Who cares?"

=Ned=

------------------------------

Date: 25 Aug 83 13:47:38-PDT (Thu)
From: harpo!floyd!vax135!cornell!uw-beaver!uw-june!emma @ Ucb-Vax
Subject: Re: Prejudice and Frames, Turing Test
Article-I.D.: uw-june.549

I don't think I can accept some of the comments being bandied about 
regarding prejudice.  Prejudice, as I understand the term, refers to 
prejudging a person on the basis of class, rather than judging that 
person as an individual.  Class here is used in a wider sense than 
economic.  Examples would be "colored folk got rhythm" or "all them
white saxophonists sound the same to me"-- this latter being a quote
from Miles Davis, by the way.  It is immediately apparent that
prejudice is a natural result of making generalizations and
extrapolating from experience.  This is a natural, and I would suspect
inevitable, result of a knowledge acquisition process which
generalizes.

Bigotry, meanwhile, refers to inflexible prejudice.  Miles has used a
lot of white saxophonists, as he recognizes that they don't all sound
the same.  Were he bigoted, rather than prejudiced, he would refuse to
acknowledge that.  The problem lies in determining at what point an 
apparent counterexample should modify a conception.  Do we decide that
gravity doesn't work for airplanes, or that gravity always works but 
something else is going on?  Do we decide that a particular white sax 
man is good, or that he's got a John Coltrane tape in his pocket?

In general, I would say that some people out there are getting awfully
self-righteous regarding a phenomenon that ought to be studied as a 
result of our knowledge acquisition process rather than used to 
classify people as sub-human.

-Joe P.

------------------------------

Date: 25 Aug 83 11:53:10-PDT (Thu)
From: decvax!linus!utzoo!utcsrgv!utcsstat!laura@Ucb-Vax
Subject: AI and Human Intelligence [& Editorial Comment]

Goodness, I stopped reading net.ai a while ago, but had an ai problem
to submit and decided to read this in case the question had already
been asked and answered. News here only lasts for 2 weeks, but things
have changed...

At any rate, you are all discussing here what I am discussing in mail 
to AI types (none of whom mentioned that this was going on here, the 
cretins! ;-) ). I am discussing bigotry by mail to AI folk.

I have a problem in furthering my discussion. When I mentioned it I
got the same response from 2 of my 3 AI folk, and am waiting for the
same one from the third.  I gather it is a fundamental AI sort of
problem.

I maintain that 'a problem' and 'a description of a problem' are not
the same thing. Thus 'discrimination' is a problem, but the word
'nigger' is not. 'Nigger' is a word which describes the problem of
discrimination. One may decide not to use the word 'nigger', but
abolishing the word only gets rid of one description of the problem,
not the problem itself.

If there were no words to express discrimination, and discrimination 
existed, then words would be created (or existing words would be 
perverted) to express discrimination. Thus language can be counted 
upon to reflect the attitudes of society, but changing the language is
not an effective way to change society.


This position is not going over very well. I gather that there is some
section of the AI community which believes that language (the
description of a problem) *is* the problem.  I am thus reduced to
saying, "oh no it isn't, you silly person," but am left holding the
bag when they start quoting from texts. I can bring out anthropology
and linguistics, and they can get out some epistemology and Knowledge
Representation, but the discussion isn't going anywhere...

can anybody out there help?

laura creighton
utzoo!utcsstat!laura


[I have yet to be convinced that morality, ethics, and related aspects
of linguistics are of general interest to AIList readers.  While I
have (and desire) no control over the net.ai discussion, I am
responsible for what gets passed on to the Arpanet.  Since I would
like to screen out topics unrelated to AI or computer science, I may
choose not to pass on some of the net.ai submissions related to
bigotry.  Contact me at AIList-Request@SRI-AI if you wish to discuss
this policy. -- KIL]

------------------------------

Date: 25 Aug 1983 1625-PDT
From: Jay <JAY@USC-ECLC>
Subject: [flamer@ida-no: Re:  Turing Test; Parry, Eliza, and Flamer]

Is this a human response??

j'
                ---------------

  Return-path: <flamer%umcp-cs%UMCP-CS@UDel-Relay>
  Received: from UDEL-RELAY by USC-ECLC; Thu 25 Aug 83 16:20:32-PDT
  Date:     25 Aug 83 18:31:38 EDT  (Thu)
  From: flamer@ida-no
  Return-Path: <flamer%umcp-cs%UMCP-CS@UDel-Relay>
  Subject:  Re:  Turing Test; Parry, Eliza, and Flamer
  To: jay@USC-ECLC
  In-Reply-To: Message of Tue, 16-Aug-83 17:37:00 EDT from
      JAY%USC-ECLC@sri-unix.UUCP <4325@sri-arpa.UUCP>
  Via:  UMCP-CS; 25 Aug 83 18:55-EDT

        From: JAY%USC-ECLC@sri-unix.UUCP

        . . . Flamer would read messages from the net and then
        reply to the sender/bboard denying all the person said,
        insulting him, and in general making unsupported statements.
        . . .

  Boy! Now that's the dumbest idea I've heard in a long time. Only an
  idiot such as yourself, who must be totally out of touch with reality,
  could come up with that. Besides, what would it prove?  It's not much
  of an accomplishment to have a program which is stupider than a human.
  The point of the Turing test is to demonstrate a program that is as
  intelligent as a human. If you can't come up with anything better,
  stay off the net!

------------------------------

End of AIList Digest
********************

∂30-Aug-83  1825	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #51
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Aug 83  18:22:27 PDT
Date: Tuesday, August 30, 1983 4:30PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #51
To: AIList@SRI-AI


AIList Digest           Wednesday, 31 Aug 1983     Volume 1 : Issue 51

Today's Topics:
  Expert Systems - Availability & Dissent,
  Automatic Translation - State of the Art,
  Fifth Generation - Book Review & Reply
----------------------------------------------------------------------

Date: 26 Aug 83 17:00:18-PDT (Fri)
From: decvax!ittvax!dcdwest!benson @ Ucb-Vax
Subject: Expert Systems
Article-I.D.: dcdwest.216

I would like to know whether there are commercial expert
systems available for sale.  In particular, I would like to
know about systems like the Programmer's Apprentice, or other
such programming aids.

Thanks in advance,

Peter Benson
!decvax!ittvax!dcdwest!benson

------------------------------

Date: 26 Aug 83 11:12:31-PDT (Fri)
From: decvax!genrad!mit-eddie!rh @ Ucb-Vax
Subject: bulstars
Article-I.D.: mit-eddi.656

from AP (or NYT?)


       COMPUTER TROUBLESHOOTER:
       'Artificially Intelligent' Machine Analyses Phone Trouble

           WASHINGTON - Researchers at Bell Laboratories say
       they've developed an ''artificially intelligent'' computer
       system that works like a highly trained human analyst to
       find troublespots within a local telephone network. Slug
       PM-Bell Computer. New, will stand. 670 words.

Oh, looks like we beat the Japanese :-( Why weren't we told that
'artificial intelligence' was about to exist?  Does anyone know if
this is the newspaper's fault, or if the guy they talked to just
wanted more attention???


-- Randwulf
(Randy Haskins);
Path= genrad!mit-eddie!rh
or... rh@mit-ee (via mit-mc)

------------------------------

Date: Mon 29 Aug 83 21:36:04-CDT
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: claims about "solving NLP"

I have never been impressed with claims about "solving the Natural
Language Processing problem" based on `solutions' for 1-2 paragraphs
of [usu. carefully (re)written] text.  There are far too many scale-up
problems for such claims to be taken seriously.  How many NLP systems
are there that have been applied to even 10 pages of NATURAL text,
with the full intent of "understanding" (or at least "treating in the
identical fashion") ALL of it?  Very few.  Or 100 pages?  Practically
none.  Schank & Co.'s "AP wire reader," for example, was NOT intended
to "understand" all the text it saw [and it didn't!], but only to 
detect and summarize the very small proportion that fell within its
domain -- a MUCH easier task, esp. considering its minuscule domain
and microscopic dictionary.  Even then, its performance was -- at best
-- debatable.

And to anticipate questions about the texts our MT system has been
applied to:  about 1,000 pages to date -- NONE of which was ever
(re)written, or pre-edited, to affect our results.  Each experiment
alluded to in my previous msg about MT was composed of about 50 pages
of natural, pre-existing text [i.e., originally intended and written
for HUMAN consumption], none of which was ever seen by the project
linguists/programmers before the translation test was run.  (Our 
dictionaries, by the way, currently comprise about 10,000 German
words/phrases, and a similar number of English words/phrases.)

We, too, MIGHT be subject to further scale-up problems -- but we're a
damned sight farther down the road than just about any other NLP
project has been, and have good reason to believe that we've licked
all the scale-up problems we'll ever have to worry about.  Even so, we
would NEVER be so presumptuous as to claim to have "solved the NLP
problem," needing only a large collection of `linguistic rules' to
wrap things up!!!  We certainly have NOT done so.

REALLY, now...

------------------------------

Date: Mon 29 Aug 83 17:11:26-CDT
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: Machine Translation - a very short tutorial

Before proclaiming the impossibility of automatic [i.e., computer]
translation of human languages, it's perhaps instructive to know
something about how human translation IS done -- and is not done -- at
least in places where it's taken seriously.  It is also useful,
knowing this, to propose a few definitions of what may be counted as
"translation" and -- more to the point -- "useful translation."
Abbreviations: MT = Machine Translation; HT = Human Translation.

To start with, the claim that "a real translator reads and understands
a text, and then generates [the text] in the [target] language" is
empty.  First, NO ONE really has anything like a good idea of HOW
humans translate, even though there are schools that "teach
translation."  Second, all available evidence indicates that (point #1
notwithstanding), different humans do it differently.  Third, it can
be shown (viz simultaneous interpreters) that nothing as complicated
as "understanding" need take place in all situations.  Fourth, 
although the contention that "there generally aren't 1-1
correspondences between words, phrases..."  sounds reasonable, it is
in fact false an amazing proportion of the time, for languages with
similar derivational histories (e.g., German & English, to say nothing
of the Romance languages).  Fifth, it can be shown that highly
skilled, well-respected technical-manual translators do not always (if
ever) understand the equipment for which they're translating manuals
[and cannot, therefore, be argued to understand the original texts in 
any fundamentally deep sense] -- and must be "understanding" in a
shallower, probably more "linguistic" sense (one perhaps more
susceptible to current state-of-the-art computational treatment).

Now as to how translation is performed in practice.  One thing to
realize here is that, at least outside the U.S. [i.e., where
translation is taken seriously and where almost all of it is done], NO
HUMAN performs "unrestricted translation" -- i.e., human translators
are trained in (and ONLY considered competent in) a FEW AREAS.
Particularly in technical translation, humans are trained in a limited
number of related fields, and are considered QUITE INCOMPETENT outside
those fields.  Another thing to realize is that essentially ALL
TRANSLATIONS ARE POST-EDITED.  I refer here not to stylistic editing,
but to editing by a second translator of superior skill and
experience, who NECESSARILY refers to the original document when
revising his subordinate's translation.  The claim that MT is
unacceptable IF/BECAUSE the results must be post-edited falls to the
objection that HT would be unacceptable by the identical argument.
Obviously, HT is not considered unacceptable for this reason -- and
therefore, neither should MT.  All arguments for acceptability then
devolve upon the question of HOW MUCH revision is necessary, and HOW
LONG it takes.

Happily, this is where we can leave the territory of pontifical
pronouncements (typically uttered by the un- or ill-informed), and
begin to move into the territory of facts and replicable experiments.
Not entirely, of course, since THERE IS NO SUCH THING AS A PERFECT
TRANSLATION and, worse, NO ONE CAN DEFINE WHAT CONSTITUTES A GOOD
TRANSLATION.  Nevertheless, professional post-editors are regularly
saddled with the burden of making operational decisions about these
matters ("Is this sufficiently good that the customer is likely to 
understand the text?  Is it worth my [company's] time to improve it
further?").  Thus we can use their decisions (reflected, e.g., in
post-editing time requirements) to determine the feasibility of MT in
a more scientific manner; to wit: what are the post-editing
requirements of MT vs. HT?  And in order to assess the economic
viability of MT, one must add: taking all expenses into account, is MT
cost-effective [i.e., is HT + human revision more or less expensive
than MT + human revision]?

Re: these last points, our experimental data to date indicate that (1)
the absolute post-editing requirements (i.e., something like "number
of changes required per sentence") for MT are increased w.r.t. HT
[this is no surprise to anyone]; (2) paradoxically, post-editing time
requirements of MT are REDUCED w.r.t. HT [surprise!]; and (3) the
overall costs of MT (including revision) are LESS than those for HT
(including revision) -- a significant finding.
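
The economic criterion above reduces to comparing total cost (translation
plus revision) over a batch of pages. The per-page figures in this sketch
are invented placeholders, not the project's data; only the shape of the
comparison reflects the argument.

```python
def total_cost(translate_per_page, revise_per_page, pages):
    """Total cost of producing revised (post-edited) translations."""
    return (translate_per_page + revise_per_page) * pages

# Hypothetical per-page costs, chosen only to illustrate findings (1)-(3):
# MT requires more *changes* per sentence, yet post-editing *time* (and
# thus cost) can still come out lower, and the machine pass itself is cheap.
mt_total = total_cost(translate_per_page=2.0, revise_per_page=6.0, pages=50)
ht_total = total_cost(translate_per_page=15.0, revise_per_page=8.0, pages=50)

mt_is_cost_effective = mt_total < ht_total  # finding (3)
```

The point of framing it this way is that "is MT acceptable?" becomes an
operational question about measured revision costs, not a matter of taste.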

We have run two major experiments to date [with our funding agency
collecting the data, not the project staff], BOTH of which produced
these results; the more recent one naturally produced better results
than the earlier one, and we foresee further improvements in the near
future.  Our finding (2) above, which SEEMS inconsistent with finding
(1), is explainable with reference to the sociology of post-editing
when the original translator is known to be human, and when he will
see the results (which probably should, and almost always does,
happen).  Further details will appear in the literature.

So why haven't you heard about this, if it's such good news?  Well,
you just did!  More to the point, we have been concentrating on
producing this system more than on writing papers about it [though I
have been presenting papers at COLING and ACL conferences], and
publishing delays are part of the problem [one reason for having
conferences].  But more papers are in the works, and the secret will
be out soon enough.

------------------------------

Date: 26 Aug 83  1209 PDT
From: Jim Davidson <JED@SU-AI>
Subject: Fifth Generation (Book Review)

                 [Reprinted from the SCORE BBoard.]

14 Aug 83
by Steven Schlossstein
(c) 1983 Dallas Morning News (Independent Press Service)

    THE FIFTH GENERATION: Artificial Intelligence and Japan's Computer
Challenge to the World. By Edward Feigenbaum and Pamela McCorduck 
(Addison-Wesley, $15.55).

    (Steven Schlossstein lived and worked in Japan with a major Wall 
Street firm for more than six years. He now runs his own Far East 
consulting firm in Princeton, N.J. His first novel, ''Kensei,'' which 
deals with the Japanese drive for industrial supremacy in the high 
tech sector, will be published by Congdon & Weed in October).

    ''Fukoku Kyohei'' was the rallying cry of Meiji Japan when that 
isolated island country broke out of its self-imposed cultural cocoon 
in 1868 to embark upon a comprehensive plan of modernization to catch 
up with the rest of the world.
    ''Rich Country, Strong Army'' is literally what it means.  
Figuratively, however, it represented Japan's first experimentation 
with a concept called industrial policy: concentrating on the 
development of strategic industries - strategic whether because of 
their connection with military defense or because of their importance 
in export industries intended to compete against foreign products.
    Japan had to apprentice herself to the West for a while to bring
it off.
    The military results, of course, were impressive. Japan defeated 
China in 1895, blew Russia out of the water in 1905, annexed Korea and
Taiwan in 1911, took over Manchuria in 1931, and sat at the top of the
Greater East Asia Co-Prosperity Sphere by 1940. This from a country
previously regarded as barbarian by the rest of the world.
    The economic results were no less impressive. Japan quickly became
the world's largest shipbuilder, replaced England as the world's 
leading textile manufacturer, and knocked off Germany as the premier 
producer of heavy industrial machinery and equipment. This from a 
country previously regarded as barbarian by the rest of the world.
    After World War II, the Ministry of Munitions was defrocked and 
renamed the Ministry of International Trade and Industry (MITI), but 
the process of strategy formulation remained the same.
    Only the postwar rendition was value-added, and you know what 
happened. Japan is now the world's No. 1 automaker, produces more 
steel than anyone else, manufactures over half the TV sets in the 
world, is the only meaningful producer of VTRs, dominates the 64K 
computer chip market, and leads the way in one branch of computer 
technology known as artificial intelligence (AI). All this from a 
country previously regarded as barbarian by the rest of the world.
    What next for Japan? Ed Feigenbaum, who teaches computer science
at Stanford and pioneered the development of AI in this country, and 
Pamela McCorduck, a New York-based science writer, write that Japan is
trying to dominate AI research and development.
    AI, the fifth generation of computer technology, is to your
personal computer as your personal computer is to pencil and paper. It
is based on processing logic, rather than arithmetic, deals in 
inferences, understands language and recognizes pictures. Or will. It 
is still in its infancy. But not for long; last year, MITI established
the Institute for New Generation Computer Technology, funded it
aggressively, and put some of the country's best brains to work on AI.
    AI systems consist of three subsystems: a knowledge base needed
for problem solving and understanding, an inference subsystem that 
determines what knowledge is relevant for solving the problem at hand,
and an interaction subsystem that facilitates communication between
the overall system and its user - between man and machine.
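
The three subsystems the review describes map onto a familiar expert-system
skeleton. The sketch below is an illustration by analogy only; the rule
format and the medical example are invented and come from neither the book
nor the review.

```python
# Knowledge base: domain rules as (conditions, conclusion) pairs.
KNOWLEDGE = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "rash"}, "measles"),
]

def infer(observed):
    """Inference subsystem: select the knowledge relevant to the case
    at hand (here, rules whose conditions are all observed)."""
    return [concl for conds, concl in KNOWLEDGE if conds <= observed]

def consult(user_line):
    """Interaction subsystem: mediate between the user's words and the
    knowledge/inference machinery."""
    observed = set(user_line.split())
    return ["possible: " + c for c in infer(observed)]
```
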
    Now America does not have a MITI, does not like industrial policy,
has not created an institute to work on AI, and is not even convinced 
that AI is the way to go. But Feigenbaum and McCorduck argue that even
if the Japanese are not successful in developing the fifth generation,
the spin-off from this 10-year project will be enormous, with
potentially wide applications in computer technology, 
telecommunications, industrial robotics, and national defense.
    ''The Fifth Generation'' walks you through AI, how and why Japan 
puts so much emphasis on the project, and how and why the Western 
nations have failed to respond to the challenge. National defense 
implications alone, the authors argue, are sufficient to justify our 
taking AI seriously.
    Smart bombs and laser weapons are but advanced wind-up toys
compared with the AI arsenal of the future. The Pentagon has a little
project called ARPA - Advanced Research Projects Agency - that has
been supporting AI small-scale, but not with the people or funding the
authors feel is meaningful.
    Unfortunately, ''The Fifth Generation'' suffers from some 
organizational defects. You don't really get into AI and how its 
complicated systems operate until you're almost halfway through the 
book. And the chapter on industrial policy - from which all 
technological blessings flow - is only three pages long. It's also at 
the back of the book instead of up front, where it belongs.
    But the issues are highlighted well by experts who are not only 
knowledgeable about AI but who are concerned about our lack of 
response to yet another challenge from Japan. The authors' depiction 
of the drivenness of the Japanese is especially poignant. It all boils
down to national survival.
    Japan no longer is in a position of apprenticeship to the West.
                        [Garbled passage omitted]
    Can the West mount an effective response to the Japanese
challenge? ''The
Fifth Generation'' doesn't think so, and for compelling reasons. Give
it a read.
    END

------------------------------

Date: Fri 26 Aug 83 15:40:16-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM>
Subject: Re: Fifth Generation (Book Review)

                 [Reprinted from the SCORE BBoard.]

Anybody who says the Japanese are *leading* in "one branch of computer
technology known as artificial intelligence" is out to lunch.  And by
what standards is DARPA describable as small?  And what is all this
BirdSong about other countries failing to "respond to the challenge"?
Hasn't this turkey read the Alvey report?  Hasn't he noticed France's
vigorous encouragement of their domestic computer industry?  Who in
America is not "convinced that AI is the way to go" (this was true of
the leadership in Britain until the Alvey report came out, I admit)
and what are they doing to hinder AI work?  Does he think 64k RAMs are
the only things that go into computers?  Does he, incidentally, know
that AI has had plenty of pioneers outside of the HPP?

More to the point, most of you know about the wildly over-optimistic
promises that were made in the 60's on behalf of AI, and what happened
in their wake.  Whipping up public hysteria is a dangerous game,
especially when neither John Q. Public nor Malcolm Forbes himself can
do very much about the 5GC project, except put pressure on the local
school board to teach the kids some math and science.
                                                        - Richard

------------------------------

End of AIList Digest
********************

∂31-Aug-83  1538	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #52
Received: from SRI-AI by SU-AI with TCP/SMTP; 31 Aug 83  15:34:47 PDT
Date: Wednesday, August 31, 1983 2:12PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #52
To: AIList@SRI-AI


AIList Digest           Wednesday, 31 Aug 1983     Volume 1 : Issue 52

Today's Topics:
  Bibliography - Vision
----------------------------------------------------------------------

Date: Tue, 30 Aug 83 15:26:12 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: Vision Bibliography

I have two hundred references from DTIC and NTIS on vision.  The list 
is not complete by any means, since I am looking at scene analysis and 
algorithms.  References are more or less from the last ten years, with 
a few 1982-83 items.  Shown are title, authors, AD number, and 
publication date.  Hope this helps some.  Mort.

[I have reformatted the entries and sorted them by author.  For files
of this size (about 20K characters), I find that it hassles the fewest
people if I just send it out instead of sending FTP instructions.
-- KIL]


GJ Agin, Representation and Description of Curved Objects, AD755139, 
Oct 72.

N Ahuja & A Rosenfeld & RM Haralick, Neighbor Gray Levels as Features 
in Pixel Classification, , 80.

N Ahuja, Mosaic Models for Image Analysis and Synthesis, ADA050100, 
Nov 77.

JO Amoss, A Syntax-Directed Method of Extracting Topological Regions 
from a Silhouette, ADA045944, Jul 77.

HC Andrews (Project Director), Image Understanding Research, 
ADA054091, Mar 78.

HC Andrews (Project Director), Image Understanding Research, 
ADA046214, Sep 77.

Anonymous, Annual Report 1980, N81-27841, Jan 81.

Anonymous, Automatic Scene Analysis, N81-12776, Nov 79.

Anonymous, Optical Array Processor, ADA118371, Jul 82.

K Arbter, Erkennung und Vermessung von Konturen mit Hilfe der 
Fouriertransformation, ADB061321, Sep 81.

A Baldwin & R Greenblatt & J Holloway & T Knight & D Moon & D Weinreb,
LISP Machine Progress Report, ADA062178, Aug 77.

DH Ballard, Parameter Networks: Towards a Theory of Low-Level Vision, 
ADA101216, Apr 81.

AG Barto & RS Sutton, Goal Seeking Components for Adaptive 
Intelligence: An Initial Assessment, ADA101476, Apr 81.

LS Baumann ed., Image Understanding, ADA052900, Apr 77.

LS Baumann ed., Image Understanding, ADA084764, Apr 80.

LS Baumann ed., Image Understanding, ADA098261, Apr 81.

LS Baumann ed., Proceedings: Image Understanding Workshop, ADA052902, 
May 78.

LS Baumann ed., Proceedings: Image Understanding Workshop, ADA064765, 
Nov 78.

BL Bean & WL Flowers & WM Gutman & AV Jeliek & RL Spellicy, Laser, IR 
and NMMW Propagation Measurements and Analyses, ADB055523L, Feb 80.

B Bhanu, Shape Matching and Image Segmentation Using Stochastic 
Labeling, ADA110033, Aug 81.

GA Biecker & DS Paden & JL Potter, Feature Tagging, ADA091691, Apr 80.

HD Block & NJ Nilsson & RW Duda, Determination and Detection of 
Features in Patterns, AD427840, Dec 63.

M Brady, Computational Approaches to Image Understanding, ADA108191, 
Oct 81.

A Broder & A Rosenfeld, Gradient Magnitude as an Aid in Color Pixel 
Classification, ADA091995, Jun 80.

RA Brooks, Symbolic Reasoning Among 3-D Models and 2-D Images, 
ADA110316, Jun 81.

J Bryant & LF Guseman Jr., Basic Research in Mathematical Pattern 
Recognition and Image Analysis, N81-23561, Jan 81.

BL Bullock, Unstructured Control and Communication Processes in Real 
World Scene Analysis, ADA049458, Oct 77.

GJ Burton, Contrast Discrimination by the Human Visual System, 
ADA104181, May 81.

B Carrigan, Pattern Recognition and Image Processing: Citations from 
NTIS Aug 77 - Jul 79, PB80814221, Aug 80.

R Cederberg, Chain-Link Coding and Segmentation for Raster Scan 
Devices, N79-17129, Nov 78.

A Celmins, A Manual for General Least Squares Model Fitting, 
ADB040229L, Jun 79.

I Chakravarty, A Generalized Line and Junction Labelling Scheme with 
Applications to Scene Analysis, PB278073, Dec 77.

I Chakravarty, A Survey of Current Techniques for Computer Vision, 
PB268385, Jan 77.

R Chellappa, On an Estimation Scheme for Gauss Markov Random Field 
Models, ADA102057, Apr 81.

CH Chen, A Comparative Evaluation of Statistical Image Segmentation 
Techniques, ADA094237, Jan 81.

CH Chen, Image Processing, ADA095552, Feb 81.

CH Chen, Research Progress on Image Segmentation, ADA101827, Jul 81.

CH Chen, Some New Results on Image Processing and Recognition, 
ADA055862, Jun 78.

PW Cheng, A Psychophysical Approach to Form Perception:  
Incompatibility as an Explanation of Integrality, ADA087607, Jul 80.

LS Coles & B Raphael & RO Duda & CA Rosen & TD Garvey & RA Yates & JH 
Munson, Application of Intelligent Automata to Reconnaissance, 
AD868871, Nov 69.

SA Cook & TP Harrington & H Toffer, Digital-Image Processing Improves 
Man-Machine Communication at a Nuclear Reactor, UNI-SA-98, Aug 82.

JL Crowley, A Representation for Visual Information, ADA121443, Nov 
81.

S Cushing and L Vaina, Further Progress in Knowledge Representation 
for Image Understanding, ADA098416, Mar 81.

DARPA, Proceedings: Image Understanding Workshop, ADA052901, Oct 77.

SM Dunn, Generalized Blomqvist Correlation, ADA102058, Apr 81.

CR Dyer, Memory-Augmented Cellular Automata for Image Analysis, 
ADA065328, Nov 78.

JO Eklundh, Studies of Some Algorithms for Digital Picture Processing,
N81-14656, 81.

J Fain & D Gorlin & F Hayes-Roth & S Rosenschein & H Sowizral & D 
Waterman, The ROSIE Language Reference Manual, ADA111025, Dec 81.

JJ Fasano & TS Huang, Feature Dimensionality Reduction Through Use of 
the Karhunen-Loève Transform in a Multisensor Pattern Recognition 
System, ADB057184, May 81.

CL Forgy, OPS5 User's Manual, ADA106558, Jul 81.

G Fowler & RM Haralick & FG Gray & C Feustel & C Grinstead, Efficient 
Graph Automorphism by Vertex Partitioning, , 83.

MS Fox, Reasoning with Incomplete Knowledge in a Resource-Limited 
Environment: Integrating Reasoning and Knowledge Acquisition, 
ADA102285, Mar 81.

H Freeman, Shape Description Via the Use of Critical Points, 
ADA040273, Jun 77.

BR Frieden, Image Processing, ADA095075, Feb 81.

DD Garber, Computational Models for Texture Analysis and Synthesis, 
ADA102470, May 81.

Geo-Centers, Inc., A Review of Three-Dimensional Vision for 
Robotics, ADA118055, May 82.

AP Ginsburg, Perceptual Capabilities, Ambiguities and Artifacts in Man
and Machine, ADA109864, 81.

RC Gonzalez, Evaluation of the Chitra Character Recognition System and
Development of Feature Extraction Algorithms, ADB059991L, May 80.

GD Hadden, A Cellular Automata Approach to Computer Vision and Image 
Processing, ADA096569, Sep 80.

SE Haehn & D Morris, OLPARS VI (On-Line Pattern Analysis and 
Recognition System), ADA118732, Jun 82.

SE Haehn & D Morris, OLPARS VI (On-Line Pattern Analysis and 
Recognition System), ADA118733, Jun 82.

EL Hall & RC Gonzalez, Multi-Sensor Scene Synthesis and Analysis, 
ADA110812, Sep 81.

EL Hall & W Frei & RY Wong, Scene Content Analysis Program - Phase II,
ADA045624, Jul 77.

RM Haralick & D Queeney, Understanding Engineering Drawings, , 82.

RM Haralick & GL Elliott, Increasing Tree Search Efficiency for 
Constraint Satisfaction Problems, , 80.

RM Haralick & LG Shapiro, Decomposition of Polygonal Shapes by 
Clustering, , .

RM Haralick & LG Shapiro, The Consistent Labeling Problem: Part I, , 
Apr 79.

RM Haralick & LG Shapiro, The Consistent Labeling Problem: Part II, , 
May 80.

RM Haralick & LT Watson, A Facet Model for Image Data, , 81.

RM Haralick & LT Watson & TJ Laffey, The Topographic Primal Sketch, , 
83.

RM Haralick, An Interpretation for Probabilistic Relaxation, , 83.

RM Haralick, Edge and Region Analysis for Digital Image Data, , 80.

RM Haralick, Ridges and Valleys on Digital Images, , 83.

RM Haralick, Scene Analysis, Homomorphism, and Consistent Labeling 
Problem Algorithms, ADA082058, Jan 80.

RM Haralick, Some Neighborhood Operators, , 81.

RM Haralick, Statistical and Structural Approaches to Texture, , May 
79.

RM Haralick, Structural Pattern Recognition, Arrangements and Theory 
of Covers, , .

RM Haralick, Using Perspective Transformations in Scene Analysis, , 
80.

F Hayes-Roth & D Gorlin & S Rosenschein & H Sowizral & D Waterman, 
Rationale and Motivation for ROSIE, ADA111018, Nov 81.

CA Hlavka & RM Haralick & SM Carlyle & R Yokoyama, The Discrimination 
of Winter Wheat Using a Growth-State Signature, , 80.

YC Ho and AK Agrawala, On Pattern Classification Algorithms - 
Introduction and Survey, AD667728, Mar 68.

JM Hollerbach, Hierarchical Shape Description of Objects by Selection 
and Modification of Prototypes, ADA024970, Nov 75.

BR Hunt, Automation of Image Processing, ADA111029, May 81.

NE Huston Jr., Shift and Scale Invariant Preprocessor, ADA114519, Dec 
81.

RA Jarvis, Computer Image Segmentation: First Partitions Using Shared 
Near Neighbor Clustering, PB277929, Dec 77.

RA Jarvis, Computer Image Segmentation: Structured Merge Strategies, 
PB277930, Dec 77.

HA Jenkinson, Image Processing Techniques for Automatic Target 
Detection, ADB055686L, Mar 81.

LN Kanal, Pattern Analysis & Modeling, ADA070961, Apr 79.

MD Kelly, Visual Identification of People by Computer, AD713252, Jul 
70.

CE Kim, On Cellular Straight Line Segments, ADA089511, Jul 80.

CE Kim, Three-Dimensional Digital Line Segments, ADA106813, Aug 81.

RL Kirby & A Rosenfeld, A Note on the Use of (Gray Level, Local 
Average Gray Level) Space as an Aid in Threshold Selection, ADA065695,
Jan 79.

L Kitchens & A Rosenfeld, Edge Evaluation Using Local Edge Coherence, 
ADA109564, Dec 80.

AH Klopf, Evolutionary Pattern Recognition Systems, AD637492, Nov 65.

WA Kornfeld, The Use of Parallelism to Implement a Heuristic Search, 
ADA099184, Mar 81.

E Kowler, Eye Movement and Visual Information Processing, ADA112399, 
Dec 81.

S Krusemark & RM Haralick, An Operating System Interface for 
Transportable Image Processing Software, , 83.

FP Kuhl & CR Giardina & OR Mitchell & DJ Charpentier, 
Three-Dimensional Object Recognition Using N-Dimensional Chain Codes, 
ADA119011, Mar 82.

R LaPado & C Reader & L Hubble, Image Processing Displays: A Report on
Commercially Available State-of-the-Art Features, ADA097226, Aug 78.

BA Lambird & D Lavine & LN Kanal, Interactive Knowledge-Based 
Cartographic Feature Extraction, ADB061479L, Oct 81.

BA Lambird & D Lavine & GC Stockman & KC Hayes & LN Kanal, Study of 
Digital Matching of Dissimilar Images, ADA102619, Nov 80.

M Lebowitz, Generalization and Memory in an Integrated Understanding 
System, ADA093083, Oct 80.

T Lozano-Perez, Spatial Planning: A Configuration Space Approach, 
ADA093934, Dec 80.

AV Luizov & NS Fedorova, Illumination and Visual Information, 
ADB056076L, Mar 81.

WI Lundgren, Scene Analysis, ADA115603, Dec 81.

D Marr and HK Nishihara, Representation and Recognition of the Spatial
Organization of Three Dimensional Shapes, ADA031882, Aug 76.

D Marr and S Ullman, Directional Selectivity and Its Use in Early 
Visual Processing, ADA078054, Jun 79.

D Marr, The Low-Level Symbolic Representation of Intensity Changes in 
an Image, ADA013669, Dec 74.

WN Martin and JK Aggarwal, Dynamic Scene Analysis: The Study of Moving
Images, ADA042124, Jan 77.

WN Martin and JK Aggarwal, Survey: Dynamic Scene Analysis, ADA060536, 
78.

J McCarthy & T Binford & C Green & D Luckham & Z Manna ed L Earnest, 
Recent Research in Artificial Intelligence and Foundations of 
Programming, ADA066562, Sep 78.

JL McClelland & DE Rumelhart, An Interactive Activation Model of the 
Effect of Context in Perception Part II, ADA090189, Jul 80.

C McCormick, Strategies for Knowledge-Based Image Interpretation, 
ADA115914, May 82.

KG Mehrotra, Some Observations in Pattern Recognition, ADA113382, Feb 
82.

DL Milgram & A Rosenfeld & T Willett & G Tisdale, Algorithms and 
Hardware Technology for Image Recognition, ADA057191, Mar 78.

DL Milgram & DJ Kahl, Recursive Region Extraction, ADA049591, Dec 77.

DL Milgram, Region Extraction Using Convergent Evidence, ADA061591, 
Jun 78.

M Minsky, K-Lines: A Theory of Memory, ADA078116, Jun 79.

OR Mitchell & FP Kuhl & TA Grogan & DJ Charpentier, A Shape Extraction
and Recognition System, , Mar 82.

CB Moler & GW Stewart, An Efficient Matrix Factorization for Digital 
Image Processing, LA-7637-MS, Jan 79.

MG Moran, Image Analysis, ADA066732, Mar 79.

JL Muerle, Project PARA: Perceiving and Recognition Automata, AD33137,
Dec 63.

GK Myers & RE Twogood, An Algorithm for Enhancing Low-Contrast Details
in Digital Images, UCID-18015, Nov 78.

NTIS, Pattern Recognition and Image Processing Aug 1980-Nov 1981, 
PB82803453, Jan 82.

PM Narendra & BL Westover, Advanced Pattern-Matching Techniques for 
Autonomous Acquisition, ADB059773L, Jan 81.

WP Nelson, Learning Game Evaluation Functions with a Compound Linear 
Machine, ADA085710, Mar 80.

NJ Nilsson & B Raphael & S Wahlstrom, Application of Intelligent 
Automata to Reconnaissance, AD841509, Jun 68.

NJ Nilsson & CA Rosen & B Raphael et al., Application of 
Intelligent Automata to Reconnaissance, AD849872, Feb 69.

NJ Nilsson, A Framework for Artificial Intelligence, ADA068188, Mar 
79.

S Nyberg, On Image Restoration and Noise Reduction with Respect to 
Subjective Criteria, N81-30847, 81.

JV Oldfield, A Special-Purpose Processor for an Automatic Feature 
Extraction System, ADA090789, Aug 80.

JS Ostrem & HD Crane, Automatic Handwriting Verification (AHV), 
ADA111329, Nov 81.

CC Parma & AR Hanson & EM Riseman, Experiments in Schema-Driven 
Interpretation of a Natural Scene, ADA085780, Apr 80.

WA Pearlman, A Visual System Model and a New Distortion Measure in the
Context of Image Processing, PB274534, Jul 77.

T Peli, An Algorithm for Recognition and Localization of Rotated and 
Scaled Objects, ADA102920, Jul 80.

M Pietikainen & A Rosenfeld, Edge-Based Texture Measures, ADA102060, 
May 81.

LJ Pinson & JP Lankford, Research on Image Enhancement Algorithms, 
ADA103216, May 81.

T Poggio & HK Nishihara & KRK Nielsen, Zero-Crossing and 
Spatiotemporal Interpolation in Vision: Aliasing and Electric Coupling
Between Sensors, ADA117608, May 82.

T Poggio, Marr's Approach to Vision, ADA104198, Aug 81.

JM Prager, Extracting and Labelling Boundary Segments in Natural 
Scenes (Revised and Updated), ADA060042, Sep 78.

RC Prather and LM Uhr, Discovery and Learning Techniques for Pattern 
Recognition, AD610725, Nov 64.

R Reddy and A Rosenfeld, Final Report on Workshop on Control 
Structures and Knowledge Representation for Image and Speech 
Understanding, ADA076563, Apr 79.

WC Rice & JS Shipman & RJ Spieler, Interactive Digital Image 
Processing Investigation Phase II, ADA087518, Apr 80.

W Richards & K Dismukes, Vision Research for Flight Simulation, 
ADA118721, Jul 82.

W Richards & KA Stevens, Efficient Computations and Representations of
Visual Surfaces, ADA089832, Dec 79.

CA Rosen and NJ Nilsson, Application of Intelligent Automata to 
Reconnaissance, AD820989, Sep 67.

S Rosenberg, Understanding in Incomplete Worlds, ADA062364, May 78.

A Rosenfeld & DL Milgram, Algorithms and Hardware Technology for Image
Recognition, ADA041906, Jul 77.

A Rosenfeld, Cellular Architectures for Pattern Recognition, 
ADA117049, Apr 82.

A Rosenfeld, Image Understanding Using Overlays, ADA086513, May 80.

A Rosenfeld, On Connectivity Properties of Grayscale Pictures, 
ADA108602, Sep 81.

A Rosenfeld, Pebble, Pushdown, and Parallel-Sequential Picture 
Acceptors, ADA051857, Feb 78.

JM Rubin & WA Richards, Color Vision and Image Intensities: When Are 
Changes Material?, ADA103926, May 81.

W Rutkowski, Shape Completion, ADA047682, Aug 77.

EC Seed & HJ Siegel, The Use of Database Techniques in the 
Implementation of a Syntactic Pattern Recognition Task on a Parallel 
Reconfigurable Machine, ADA113934, Dec 81.

S Seeman, FIPS Software for Fast Fourier Transform, Filtering and 
Image Rotation, N79-17594, Oct 78.

LG Shapiro & RM Haralick, A Spatial Data Structure, , 80.

LG Shapiro & RM Haralick, Organization of Relational Models for Scene 
Analysis, , Nov 82.

LG Shapiro & RM Haralick, Structural Descriptions and Inexact 
Matching, , Sep 81.

JE Shore & RM Gray, Minimum Cross-Entropy Pattern Classification and 
Cluster Analysis, ADA086158, Apr 80.

DW Small, Image Processing Program Completion Report, ADA061597, Aug 
78.

DA Smith, Using Enhanced Spherical Images for Object Representation, 
ADA078065, May 79.

DR Smith, On the Computational Complexity of Branch and Bound Search 
Strategies, ADA081608, Nov 79.

BE Soland & PM Narendra & RC Fitch & DV Serreyn & TG Kopet, Prototype 
Automatic Target Screener, ADA060849, Jun 78.

BE Soland & PM Narendra & RC Fitch & DV Serreyn & TG Kopet, Prototype 
Automatic Target Screener, ADA060850, Sep 78.

AJ Stenger & TA Zimmerlin & JP Thomas & M Braunstein, Advanced 
Computer Image Generation Techniques Exploiting Perceptual 
Characteristics, ADA103365, Aug 81.

KA Stevens, Surface Perception from Local Analysis of Texture and 
Contour, ADA084803, Feb 80.

GC Stockman & BA Lambird & D Lavine & LN Kanal, Knowledge-Based Image 
Analysis, ADA101319, Apr 81.

GC Stockman & SH Kopstein, The Use of Models in Image Analysis, 
ADA067166, Jan 79.

TM Strat, A Numerical Method for Shape-From-Shading from a Single 
Image, ADA063071, Jan 79.

LT Suminski Jr. & PH Hulin, Computer Generated Imagery (CGI) Current 
Technology and Cost Measures Feasibility Study, ADA091636, Sep 80.

P Szolovits & WA Martin, Brand X Manual, ADA093041, Nov 80.

J Taboada, Coherent Optical Methods for Applications in Robot Visual 
Sensing, ADA110107, 81.

JM Tenenbaum & MA Fischler & HC Wolf, A Scene Analysis Approach to 
Remote Sensing, N79-13438, Jun 78.

U Maryland, Algorithms and Hardware Technology for Image Recognition, 
ADA049590, Oct 77.

S Ullman, The Interpretation of Structure from Motion, ADA062814, Oct 
76.

SA Underwood et al., Visual Learning and Recognition by 
Computer, AD752238, Apr 72.

L Vaina & S Cushing, Foundation of a Knowledge Representation System 
for Image Understanding, ADA095992, Oct 80.

FMDA Vilnrotter, Structural Analysis of Natural Textures, ADA110032, 
Sep 81.

HF Walker, The Mean-Square Error Optimal Linear Discriminant Function 
and Its Application to Incomplete Data Vectors, N79-21827, Feb 79.

S Wang & AY Wu & A Rosenfeld, Image Approximation from Grayscale 
"Medial Axes", ADA091993, May 80.

S Wang & DB Elliott & JB Campbell & RW Erich & RM Haralick, Spatial 
Reasoning in Remotely Sensed Data, , Jan 83.

LT Watson & RM Haralick & OA Zuniga, Constrained Transform Coding and 
Surface Fitting, , May 83.

OA Wehmanen, Pure Pixel Classification Software, N81-11689, Jul 80.

D Weinreb & D Moon, Flavors: Message Passing in the LISP Machine, 
ADA095523, Nov 80.

R Weyhrauch, Prolegomena to a Theory of Formal Reasoning, ADA065698, 
Dec 78.

TD Williams, Computer Interpretation of a Dynamic Image from a Moving 
Vehicle, ADA107565, May 81.

PH Winston & RH Brown editors, Progress in Artificial Intelligence 
1978 Volume 1, ADA068838, 79.

PH Winston & RH Brown eds., Progress in Artificial Intelligence 1978 
Volume 2, ADA068839, 79.

JW Woods, Markov Image Modeling, ADA066078, Oct 78.

AY Wu & T Hong & A Rosenfeld, Threshold Selection Using Quadtrees, 
ADA090245, Mar 80.

VA Yakubovich, Machines That Can Learn to Recognize Patterns, 
AD618643, 63.

JK Yan & DJ Sakrison, Encoding of Images Based on a Two-Component 
Source Model, ADA051033, Nov 77.

Y Yasuoka & RM Haralick, Peak Noise Removal by a Facet Model, , 83.

C Yen, An Image Processing Software Package, ADA101072, Jun 81.

C Yen, On the Use of Fisher's Linear Discriminant for Image 
Segmentation, ADA091591, Nov 80.

R Yokoyama & RM Haralick, Texture Pattern Image Generation by Regular 
Markov Chain, , 79.

LA Zadeh, Theory of Fuzziness and Its Application to Information 
Processing and Decision-Making, ADA064598, Oct 76.

AL Zobrist and WB Thompson, Building a Distance Function for Gestalt 
Grouping, ADA015435, 75.

------------------------------

End of AIList Digest
********************

∂02-Sep-83  1043	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #53
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Sep 83  10:42:04 PDT
Date: Thursday, September 1, 1983 2:02PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #53
To: AIList@SRI-AI


AIList Digest             Friday, 2 Sep 1983       Volume 1 : Issue 53

Today's Topics:
  Conferences - AAAI-83 Attendance & Logic Programming,
  AI Publications - Artificial Intelligence Journal & Courseware,
  Artificial Languages - LOGLAN,
  Lisp Availability - PSL & T,
  Automatic Translation - Ada Request,
  NL & Scientific Method - Rebuttal,
  Intelligence - Definition
----------------------------------------------------------------------

Date: 31 Aug 83 0237 EDT
From: Dave.Touretzky@CMU-CS-A
Subject: AAAI-83 registration

The actual attendance at AAAI-83 was about 2000, plus an additional
1700 people who came only for the tutorials.  This gives a total of
3700.  While much less than the 7000 figure, it's quite a bit larger
than last year's attendance.  Interest in AI seems to be growing
rapidly, spurred partly by media coverage, partly by interest in
expert systems and partly by the 5th generation thing.  Another reason
for this year's high attendance was the Washington location.  We got
tons of government people.

Next year's AAAI conference will be hosted by the University of Texas
at Austin.  From a logistics standpoint, it's much easier to hold a
conference in a hotel than at a university.  Unfortunately, I'm told
there are no hotels in Austin big enough to hold us.  Such is the
price of growth.

-- Dave Touretzky, local arrangements committee member, AAAI-83 & 84

------------------------------
Date: Thu 1 Sep 83 09:15:17-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Logic Programming Symposium

This is a reminder that the September 1 deadline for submissions to
the IEEE Logic Programming Symposium, to be held in Atlantic City,
New Jersey, February 6-9, 1984, has now all but arrived.  If you are
planning to submit a paper, you are urged to do so without further
delay.  Send ten double-spaced copies to the Technical Chairman:

	Doug DeGroot, IBM Watson Research Center
	PO Box 218, Yorktown Heights, NY 10598

------------------------------

Date: Wed, 31 Aug 83 12:10 PDT
From: Bobrow.PA@PARC-MAXC.ARPA
Subject: Subscriptions to the Artificial Intelligence Journal

   Individuals (not institutions) belonging to the AAAI, to SIGART or
to AISB can receive a reduced rate personal subscription to the
Artificial Intelligence Journal.  To apply for a subscription, send a
copy of your membership form with a check for $50 (made out to
Elsevier) to:
        Elsevier Science Publishers
        Attn: John Tagler
        52 Vanderbilt Avenue
        New York, New York 10017
North Holland (Elsevier) will acknowledge receipt of the request for
subscription, and provide information about which issues will be
included in your subscription, and when they should arrive.  Back
issues are not available at the personal rate.

Artificial Intelligence, an International journal, has been the
journal of record for the field of Artificial Intelligence since
1970.  Articles for submission should be sent (three copies) to Dr.
Daniel G. Bobrow, Editor-in-chief, Xerox Palo Alto Research Center,
3333 Coyote Hill Road, Palo Alto, California 94304, or to Prof.
Patrick J. Hayes, Associate Editor, Computer Science Department,
University of Rochester, Rochester N.Y. 14627.


danny bobrow

------------------------------

Date: 31 Aug 1983 17:10:40 EDT (Wednesday)
From: Marshall Abrams <abrams at mitre>
Subject: College-level courseware publishing

I have learned that Addison-Wesley is setting up a new
courseware/software operation and are looking for microcomputer
software packages at the college level.  I think the idea is for a
student to be able to go to the bookstore and buy a disk and
instruction manual for a specific course.

Further details on request.

------------------------------

Date: 29 Aug 1983 2154-PDT
From: VANBUER@USC-ECL
Subject: Re: LOGLAN

[...]

The Loglan Institute is in the middle of a year-long "quiet spell." 
After several years of experiments with sounds and of patching various
small logical details (e.g., providing two unambiguous ways to express
the two interpretations of "pretty little girls"), the Institute is
busily preparing materials on the new version, getting ready to "go
public" again in a fairly big way.
        Darrel J. Van Buer
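
[The "pretty little girls" example above is the classic bracketing
ambiguity. A toy sketch of the two readings Loglan separates -- the
tuple notation here is purely illustrative, not Loglan syntax. -- KIL]

```python
# Two parses of "pretty little girls":
#   (a) [[pretty little] girls] -- girls who are pretty-little
#   (b) [pretty [little girls]] -- little girls who are pretty
parse_a = (("pretty", "little"), "girls")   # modifier group first
parse_b = ("pretty", ("little", "girls"))   # head group first

# The two parses are distinct structures; English writes them the
# same way, which is the ambiguity Loglan gives two surface forms.
print(parse_a != parse_b)
```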

------------------------------

Date: 30 Aug 1983 0719-MDT
From: Robert R. Kessler <KESSLER@UTAH-20>
Subject: re: Lisps on 68000's


     Date: 24 Aug 83 19:47:17-PDT (Wed)
     From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
     Subject: Re: Lisps on 68000's - (nf)
     Article-I.D.: uiucdcs.2626

     ....

     I think PSL is definitely a superior lisp for the 68000, but I
     have no idea whether it will be available for non-HP machines...


     Jordan Pollack
     University of Illinois
     ...pur-ee!uiucdcs!uicsl!pollack

Yes, PSL is available for other 68000's, particularly the Apollo.  It
is also being released for the DecSystem-20 and Vax running 4.x Unix.
Send queries to

Cruse@Utah-20

Bob.

------------------------------

Date: Tue, 30 Aug 1983  14:32 EDT
From: MONTALVO@MIT-OZ
Subject: Lisps on 68000's


    From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
    Subject: Re: Lisps on 68000's - (nf)
    Article-I.D.: uiucdcs.2626

    I played with a version of PSL on a HP 9845 for several hours one
    day.  The environment was just like running FranzLisp under Emacs
    in ...

A minor correction so people don't get confused:  it was probably an 
HP 9836 not an HP 9845.  I've used both machines including PSL on the 
36, and doubt very much that PSL runs on a 45.

------------------------------

Date: Wed, 31 Aug 83 01:25:29 EDT
From: Jonathan Rees <Rees@YALE.ARPA>
Subject: Re: Lisps on 68000's


    Date: 19 Aug 83 10:52:11-PDT (Fri)
    From: harpo!eagle!allegra!jdd @ Ucb-Vax
    Subject: Lisps on 68000's
    Article-I.D.: allegra.1760

    ...  T sounds good, but the people who are saying it's
    great are the same ones trying to sell it to me for several
    thousand dollars, so I'd like to get some more disinterested
    opinions first.  The only person I've talked to said it was
    awful, but he admits he used an early version.

T is distributed by Yale for $75 to universities and other non-profit 
organizations.

Yale has not yet decided on the means by which it will distribute T to
for-profit institutions, but it has been negotiating with a few 
companies, including Cognitive Systems, Inc.  To my knowledge no final
agreements have been signed, so right now, no one can sell it.

"Supported" versions will be available from commercial outfits who are
willing to take on the extra responsibility (and reap the profits?),
but unsupported versions will presumably still be available directly
from Yale.

Regardless of the final outcome, no company or companies will have 
exclusive marketing rights.  We do not want a high price tag to
inhibit availability.

                        Jonathan Rees
                        T Project
                        Yale Computer Science Dept.

P.S. As a regular T user, I can say that it is a good system.  As its 
principal implementor, I won't claim to be disinterested.
Testimonials from satisfied users may be found in previous AILIST
digests; perhaps you can obtain back issues.

------------------------------

Date: 1 Sep 1983 11:58-EDT
From: Dan Hoey <hoey@NRL-AIC>
Subject: Translation into Ada:  Request for Info

It is estimated that the WMCCS communications system will require five
years to translate into Ada.  Not man-years, but years; if the
staffing is assumed to exceed two hundred, then we are talking about a
man-millennium for this task.

Has any work been done on mechanical aids for translating programs
into Ada?  I seek pointers to existing and past projects, or
assurances that no work has been done in this area.  Any pointers to
such information would be greatly appreciated.

To illustrate my lack of knowledge in this field, the only work I have
heard of for translating from one high-level language to another is 
UniLogic's translator for converting BLISS to PL/1.  As I understand 
it, their program only works on the Scribe document formatter but
could be extended to cover other programs.  I am interested in hearing
of other translators, especially those for translating into
strongly-typed languages.

Dan Hoey HOEY@NRL-AIC.ARPA

------------------------------

Date: Wed 31 Aug 83 18:42:08-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Solutions of the natural language analysis problem

Given the downhill trend of some contributions on natural language 
analysis in this group, this is my last comment on the topic, and is
essentially an answer to Stan the leprechaun hacker (STLH for short).

I didn't "admit" that grammars only reflect some aspects of language.
(Using loaded verbs such as "admit" is not conducive to the best 
quality of discussion.)  I just STATED THE OBVIOUS. The equations of 
motion only reflect SOME aspects of the material world, and yet no 
engineer goes without them. I presented this point at greater length 
in my earlier note, but the substantive presentation of method seems 
to have gone unanswered. Incidentally, I worked for several years in a
civil engineering laboratory where ACTUAL dams and bridges were 
designed, and I never saw there the preference for alchemy over 
chemistry that STLH suggests is the necessary result of practical 
concerns. Elegance and reproducibility do not seem to be enemies of 
generality in other scientific or engineering disciplines.  Claiming 
for AI an immunity from normal scientific standards (however flawed 
...) is excellent support for our many detractors, who may just now be
on the defensive because of media hype, but will surely come back to 
the fray, with that weapon plus a long list of unfulfilled promises 
and irreproducible "results."

Lack of rigor follows from lack of method. STLH tries to bludgeon us
with "generating *all* the possible meanings" of a sentence.  Does he
mean ALL of the INFINITY of meanings a sentence has in general? Even
leaving aside model-theoretic considerations, we are all familiar with

        he wanted me to believe P so he said P
        he wanted me to believe not P so he said P because he thought
           that I would think that he said P just for me to believe P
           and not believe it
        and so on ...

in spy stories.
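
[Pereira's regress can be sketched as a generator that yields one
reading per nesting depth, so no finite enumeration of meanings is
ever complete. A toy illustration only, not a semantic theory. -- KIL]

```python
import itertools

# Each deeper reading wraps the previous one in another layer of
# "he thought that I would think that ...", as in the spy stories.
def interpretations(p="P"):
    reading = f"he wanted me to believe {p}, so he said {p}"
    while True:
        yield reading
        reading = ("he thought that I would think that " + reading +
                   ", so he said " + p)

# Take the first three readings; the sequence itself never ends.
for reading in itertools.islice(interpretations(), 3):
    print(reading)
```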

The observation that "we need something that models human cognition 
closely enough..." begs the question of what human cognition looks 
like. (Silly me, it looks like STLH's program, of course.)  STLH also 
forgets that it is often better for a conversation partner (whether man 
or machine) to say "I don't understand" than to go on saying "yes, 
yes, yes ..." and get it all wrong, as people (and machines) that are 
trying to disguise their ignorance do.

It is indeed not surprising that "[his] problems are really concerned 
with the acquisition of linguistic knowledge." Once every grammatical 
framework is thrown out, it is extremely difficult to see how new 
linguistic knowledge can be assimilated, whether automatically or even
by programming it in. As to the notion that "everyone is an expert on 
the native language", it is similar to the claim that everyone with 
working ears is an expert in acoustics.

As to "pernicious behavior", it would be better if STLH would first 
put his own house in order: he seems to believe that to work at SRI 
one needs to swear eternal hate to the "Schank camp" (whatever that 
is); and useful criticism of other people's papers requires at least a
mention of the title and of the objections. A bit of that old battered
scientific protocol would help...

Fernando Pereira

------------------------------

Date: Tue, 30 Aug 1983  15:57 EDT
From: MONTALVO@MIT-OZ
Subject: intelligence is...

    Date: 25 Aug 1983 1448-PDT
    To: AIList at MIT-MC
    From: Jay <JAY@USC-ECLC>
    Subject: intelligence is...

      An intelligence must have at least three abilities; To act; To
    perceive, and classify (as one of: better, the same, worse) the
    results of its actions, or the environment after the action; and
    lastly To change its future actions in light of what it has
    perceived, in attempt to maximize "goodness", and avoid "badness".
    My views are very obviously flavored by behaviorism.

Where do you suppose the evolutionary cutoff is for intelligence?  By
this definition a Planaria (flatworm) is intelligent.  It can learn a
simple Y maze.

I basically like this definition of intelligence but I think the 
learning part lends itself to many degrees of complexity, and 
therefore, the definition leads to many degrees of intelligence.  
Maybe that's ok.  I would like to see an analysis (probably NOT on 
AIList, although maybe some short speculation might be appropriate) 
of the levels of complexity that a learner could have.  For example, 
one with a representation of the agent's action would be more 
complicated (therefore, more intelligent) than one without.  Probably 
a Planaria has no representation of its actions, only of the results 
of its actions.
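The lowest level of this hierarchy can be sketched.  The toy below is
mine, not Jay's or Montalvo's: a learner with Jay's three abilities
and nothing more.  It acts, classifies the result as better or worse,
and shifts future action accordingly, keeping only a score per maze
arm and no representation of its own actions -- roughly the Planaria
level.

```python
import random

class YMazeLearner:
    """A minimal act/classify/adapt learner for a two-armed Y maze."""

    def __init__(self):
        self.score = {"left": 0.0, "right": 0.0}

    def act(self):
        # Prefer the arm with the higher score; break ties at random.
        left, right = self.score["left"], self.score["right"]
        if left == right:
            return random.choice(["left", "right"])
        return "left" if left > right else "right"

    def learn(self, arm, reward):
        # Classify the outcome: "better" raises the chosen arm's
        # score, "worse" lowers it.
        self.score[arm] += 1.0 if reward > 0 else -1.0

# Food is always in the left arm; after a few trials the learner
# reliably turns left.
learner = YMazeLearner()
for _ in range(20):
    arm = learner.act()
    learner.learn(arm, 1 if arm == "left" else -1)
assert learner.act() == "left"
```

A learner one level up would also carry a representation of what it
did (say, a history of its own turns), which is exactly the extra
complexity distinguished above.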

------------------------------

End of AIList Digest
********************

∂09-Sep-83  1317	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #54
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Sep 83  13:16:53 PDT
Date: Friday, September 9, 1983 9:02AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #54
To: AIList@SRI-AI


AIList Digest             Friday, 9 Sep 1983       Volume 1 : Issue 54

Today's Topics:
  Robotics - Walking Robot,
  Fifth Generation - Book Review Discussion,
  Methodology - Rational Psychology,
  Lisp Availability - T,
  Prolog - Lisp Based Prolog, Foolog
----------------------------------------------------------------------

Date: Fri 2 Sep 83 19:24:59-PDT
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Strong, agile robot

                 [Reprinted from the SCORE BBoard.]

     There is a nice article in the current Robotics Age about an
outfit down in Anaheim (not Disney) that has built a six-legged robot
with six legs spaced radially around a circular core.  Each leg has
three motors, and there are enough degrees of freedom in the system to
allow the robot to assume various postures such as a low, tucked one 
for tight spots; a tall one for looking around, and a wide one for
unstable surfaces.  As a demonstration, they had the robot climb into
the back of a pickup truck, climb out, and then lift up the truck by
the rear end and move the truck around by walking while lifting the
truck.

     It's not a heavy AI effort; this thing is a teleoperator
controlled by somebody with a joystick and some switches (although it
took considerable computer power to make it possible for one joystick
to control 18 motors in such a way that the robot can walk faster than
most people).  Still, it begins to look like walking machines are
finally getting to the point where they are good for something.  This
thing is about human sized and can lift 900 pounds; few people can do
that.

------------------------------

Date: 3 Sep 83 12:19:49-PDT (Sat)
From: harpo!eagle!mhuxt!mhuxh!mhuxr!mhuxv!akgua!emory!gatech!pwh@Ucb-Vax
Subject: Re: Fifth Generation (Book Review)
Article-I.D.: gatech.846

In response to Richard Treitel's comments about the Fifth Generation
book review recently posted:

        *This* turkey, for one, has not heard of the "Alvey report."
        Do tell...

I believe that part of your disagreement with the book reviewer stems
from the fact that you seem to be addressing different audiences. He,
a concerned but ignorant lay audience; you, the AI intelligentsia on
the net.

phil hutto


CSNET pwh@gatech
INTERNET pwh.gatech@udel-relay
UUCP ...!{allegra, sb1, ut-ngp, duke!mcnc!msdc}!gatech!pwh


p.s. - Please do elaborate on the Alvey Report. Sounds fascinating.

------------------------------

Date: Tue 6 Sep 83 14:24:28-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Re: Fifth Generation (Book Review)

Phil,

I wish I were in a position to elaborate on the Alvey Report.  Here's
all I know, as relayed by a friend of mine who is working back in
Britain:

As a response to either (i) the challenge/promise of the Information
Era or (ii) the announcement of a major Japanese effort to develop AI
systems, Mrs.  Thatcher's government commissioned a Commission,
chaired by some guy named Alvey about whom I don't know anything
(though I suspect he is an academic of some stature, else he wouldn't
have been given the job).  The mission of this Commission (or it may
have been a Committee) was to produce recommendations for national
policy, to be implemented probably by the Science and Engineering 
Research Council.  They found that while a few British universities
are doing quite good computer science, only one of them is doing AI
worth mentioning, namely Edinburgh, and even there, not too much of
it.  (The reason for this is that an earlier Government commissioned
another Report on AI, which was written by Professor Sir James
Lighthill, an academic of some stature.  Unfortunately he is a
mathematician specialising in fluid dynamics -- said to have designed 
Concorde's wings, or some such -- and he concluded that the only bit
of decent work that had been done in AI to date was Terry Winograd's
thesis (just out) and that the field showed very little promise.  As a
result of the Lighthill Report, AI was virtually a dirty word in
Britain for ten years.  Most people still think it means artificial
insemination.)  Alvey's group also found, what anyone could have told
the Government, that research on all sorts of advanced science and
technology was disgracefully stunted.  So they recommended that a few
hundred million pounds of state and industrial funds be pumped into 
research and education in AI, CS, and supporting fields.  This
happened about a year ago, and the Gov't basically bought the whole
thing, with the result that certain segments of the academic job
market over there went straight from famine to feast (the reverse
change will occur pretty soon, I doubt not).  It kind of remains to be
seen what industry will do, since we don't have a MITI.

I partly accept your criticism of my criticism of that review, but I
also believe that a journalist has an obligation not to publish
falsehoods, even if they are generally believed, and to do more than
re-hash the output of his colleagues into a form consistent with the
demands of the story he is "writing".

                                        - Richard

------------------------------

Date: Sat 3 Sep 83 13:28:36-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Rational Psychology

I've just read Jon Doyle's paper "Rational Psychology" in the latest 
AI Magazine. It's one of those papers you wish (I wish) you had 
written yourself. The paper shows implicitly what is wrong with many
of the arguments in discussions on intelligence and language analysis 
in this group. I am posting this as a starting shot in what I would 
like to be a rational discussion of methodology. Any takers?

Fernando Pereira

PS. I have been a long-time fan of Truesdell's rational mechanics and 
thermodynamics (being a victim of "black art" physics courses). Jon 
Doyle's emphasis on Truesdell's methodology is for me particularly 
welcome.


[The article in question is rather short, more of an inspirational
pep talk than a guide to the field.  Could someone submit one
"rational argument" or other exemplar of the approach?  Since I am
not familiar with the texts that Doyle cites, I am unable to discern
what he and Fernando would like us to discuss or how they would have
us go about it. -- KIL]

------------------------------

Date: 2 Sep 1983 11:26-PDT
From: Andy Cromarty <andy@aids-unix>
Subject: Availability of T


     Yale has not yet decided on the means by which it will distribute
     T to for-profit institutions, but it has been negotiating with a
     few companies, including Cognitive Systems, Inc.  To my knowledge
     no final agreements have been signed, so right now, no one can sell
     it.  ...We do not want a high price tag to inhibit availability.

        -- Jonathan Rees, T Project (REES@YALE) 31-Aug-83

About two days before you sent this to the digest, I received a
14-page T licensing agreement from Yale University's "Office of
Cooperative Research".

Prices ranged from $1K for an Apollo to $5K for a VAX 11/780 for
government contractors (e.g. us), with no software support or
technical assistance.  The agreement does not actually say that
sources are provided, although that is implied in several places. A
rather murky trade secret clause was included in the contract.

It thus appears that T is already being marketed.  These cost figures,
however, are approaching Scribe territory.  Considering (a) the cost
of $5K per VAX CPU, (b) the wide variety of alternative LISPs
available for the VAX, and (c) the relatively small base of existing T
(or Scheme) software, perhaps Yale does "want a high price tag to
inhibit availability" after all....
                                                        asc

------------------------------

Date: Thursday, 1 September 1983 12:14:59 EDT
From: Brad.Allen@CMU-RI-ISL1
Subject: Lisp Based Prolog

                 [Reprinted from the Prolog Digest.]

I would like to voice disagreement with Fernando Pereira's implication
that Lisp Based Prologs are good only for pedagogical purposes. The
flipside of efficiency is usability, and until there are Prolog
systems with exploratory programming environments which exhibit the
same features as, say, Interlisp-D or Symbolics machines, there will be
a place for Lisp Based Prologs which can use such features as, e.g., 
bitmap graphics and calls to packages in other languages.  Lisp Based
Prologs can fill the void between now and the point when software
accumulation in standard Prolog has caught up to that of Lisp ( if it
ever does ).

------------------------------

Date: Sat 3 Sep 83 10:51:22-PDT
From: Pereira@SRI-AI
Subject: Prolog in Lisp

                 [Reprinted from the Prolog Digest.]

Relying on ( inferior ) Prologs in Lisp is the best way of not 
contributing to Prolog software accumulation. The large number of 
tools that have been built at Edinburgh show the advantages for the 
whole Prolog community of sites 100% committed to building everything 
in Prolog.  By far the best debugging environment for Prolog programs 
in use today is the one on the DEC-10/20 system, and that is written 
entirely in Prolog. Its operation is very different from, and for 
Prolog purposes much superior to, all Prolog debuggers built on top of Lisp 
debuggers that I have seen to date. Furthermore, integrating things 
like screen management into a Prolog environment in a graceful way is 
a challenging problem ( think of how long it took until flavors came 
up as the way of building the graphics facilities on the MIT Lisp 
machines ), which will also advance our understanding of computer 
graphics ( I have written a paper on the subject, "Can drawing be 
liberated from the von Neumann style?" ).

I am not saying that Prologs in Lisp are not to be used ( I use one 
myself on the Symbolics Lisp machines ), but that a large number of 
conceptual and language advances will be lost if we don't try to see 
environmental tools in the light of logic programming.

-- Fernando Pereira

------------------------------

Date: Mon, 5 Sep 1983  03:39 EDT
From: Ken%MIT-OZ@MIT-MC
Subject: Foolog

                 [Reprinted from the Prolog Digest.]

In Pereira's introduction to Foolog [a misunderstanding; see the next
article -- KIL] and my toy interpreter he says:

     However, such simple interpreters ( even the
     Abelson and Sussman one which is far better than
     PiL ) are not a sufficient basis for the claim
     that "it is easy to extend Lisp to do what Prolog
     does." What Prolog "does" is not just to make
     certain deductions in a certain order, but also
     make them very fast. Unfortunately, all Prologs in
     Lisp I know of fail in this crucial aspect ( by
     factors between 30 and 1000 ).

I never claimed that my little interpreter was more than a toy.
Its primary value is pedagogic, in that it makes the operational
semantics of the pure part of Prolog clear.  Regarding Foolog, I
would defend it in that it is relatively complete:

-- it contains cut, bagof, call, etc., and for I/O and arithmetic his
primitive called "lisp" is adequate.  In the introduction he claims
that it's 75% of the speed of the DEC 10/20 Prolog interpreter.  If
that makes it a toy, then all but 2 or 3 Prolog implementations are
non-toys.
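For readers who have never seen such a one-page interpreter, here is
a sketch of what it captures (in Python rather than Lisp, and not the
actual Foolog or PiL code; the term representation -- variables as
strings starting with "?", compound terms as tuples -- is my own
choice): unification plus depth-first, left-to-right resolution is
all the operational semantics that pure Prolog requires.

```python
def unify(x, y, s):
    """Unify two terms under substitution s.  Variables are strings
    beginning with '?'; compound terms are tuples.  Returns the
    extended substitution, or None on failure.  (No occur check,
    like most toy Prologs.)"""
    def walk(t):
        while isinstance(t, str) and t.startswith("?") and t in s:
            t = s[t]
        return t
    x, y = walk(x), walk(y)
    if x == y:
        return s
    if isinstance(x, str) and x.startswith("?"):
        return {**s, x: y}
    if isinstance(y, str) and y.startswith("?"):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

def solve(goals, rules, s, depth=0):
    """Depth-first, left-to-right SLD resolution over the rule list;
    yields one substitution per solution found."""
    if not goals:
        yield s
        return
    first, rest = goals[0], goals[1:]
    for head, body in rules:
        # Rename the rule's variables apart using the recursion depth,
        # which is unique along any derivation path.
        rn = lambda t: (tuple(rn(u) for u in t) if isinstance(t, tuple)
                        else t + "#" + str(depth)
                        if isinstance(t, str) and t.startswith("?") else t)
        s2 = unify(first, rn(head), s)
        if s2 is not None:
            yield from solve([rn(g) for g in body] + list(rest),
                             rules, s2, depth + 1)

# append([], Ys, Ys).  append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).
rules = [
    (("append", "nil", "?ys", "?ys"), []),
    (("append", ("cons", "?x", "?xs"), "?ys", ("cons", "?x", "?zs")),
     [("append", "?xs", "?ys", "?zs")]),
]

# Run append "backwards": all splits of the list [1, 2].
goal = ("append", "?a", "?b", ("cons", "1", ("cons", "2", "nil")))
assert len(list(solve([goal], rules, {}))) == 3
```

The three solutions are the three splits of a two-element list --
precisely the deductions-in-a-certain-order behavior that the quoted
passage says such interpreters do capture, and the speed that they do
not.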

[Comment: I agree with Fernando Pereira and Ken that there are lots
and lots of horribly slow Prologs floating around. But I do not
think that it is impossible to write a fast one in Lisp, even on a
standard computer. One of the latest versions of the Foolog
interpreters is actually slightly faster than Dec-10 Prolog when
measuring LIPS.  The Foolog compiler I am working on compiled
naive-reverse to half the speed of compiled Dec-10 Prolog ( including
mode declarations ).  The compiler opencodes unification, optimizes
tail recursion and uses determinism, and the code fits in about three
pages ( all of it is in Prolog, of course ).  -- Martin Nilsson]

I tend to agree that too many claims are made for "one day wonders".
Just because I can implement most of Prolog in one day in Lisp
doesn't mean that the implementation is any good.  I know because I
started almost two years ago with a very tiny implementation of
Prolog in Lisp.  As I started to use it for serious applications it
grew to the point where today it's up to hundreds of pages of code (
the entire source code for the system comes to 230 Tops20 pages ).
The Prolog runs on Lisp Machines ( so we call it LM-Prolog ).  Mats
Carlsson here in Uppsala wrote a compiler for it and it is a serious
implementation.  It runs naive reverse of a list 30 long on a CADR in
less than 80 milliseconds (about 6250 Lips).  Lambdas and 3600s
typically run from 2 to 5 times faster than Cadrs so you can guess
how fast it'll run.

Not only is LM-Prolog fast but it incorporates many important
innovations.  It exploits the very rich programming environment of
Lisp Machines.  The following is a short list of its features:

User-Extensible Interpreter
  Extensible unification, for implementing,
  e.g., parallelism and constraints

Optimizing Compiler
  Open compilation; tail-recursion removal and
  automatic detection of determinacy; compiled
  unification with microcoded runtime support;
  efficient bi-directional interface to Lisp

Database Features
  User-controlled indexing; multiple databases
  (worlds)

Control Features
  Efficient conditionals; demand-driven
  computation of sets and bags

Access to Lisp Machine Features
  Full programming environment: Zwei editor,
  menus, windows, processes, networks,
  arithmetic ( arbitrary-precision, floating,
  rational, and complex numbers ), strings,
  arrays, I/O streams

Language Features
  Optional occur check; handling of cyclic
  structures; arbitrary arity

Compatibility Package
  Automatic translation from DEC-10 Prolog
  to LM-Prolog

Performance
  Compiled code: up to 6250 LIPS on a CADR
  Interpreted code: up to 500 LIPS

Availability
  LM-Prolog currently runs on LMI CADRs
  and Symbolics LM-2s.  Soon to run on
  Lambdas.

Commercially available soon.
For more information contact
Kenneth M. Kahn or Mats Carlsson.

Inquiries can be directed to:

KEN@MIT-OZ   or

UPMAIL P. O. Box 2059
       S-75002
       Uppsala, Sweden

Phone  +46-18-111925

------------------------------

Date: Tue 6 Sep 83 15:22:25-PDT
From: Pereira@SRI-AI
Subject: Misunderstanding

                 [Reprinted from the PROLOG Digest.]

I'm sorry that my first note on Prologs in Lisp was construed as a 
comment on Foolog, which appeared in the same Digest.  In fact, my 
note was sent to the digest BEFORE I knew Ken was submitting Foolog.  
Therefore, it was not a comment on Foolog.  As to LM-Prolog, I have a 
few comments about its speed:

1. It depends essentially on the use of Lisp machine subprimitives and
a microcoded unification, which are beyond Lisp the language and the 
Lisp environment in all but the MIT Lisp machines.  If LM-Prolog can 
be considered "a Prolog in Lisp," then DEC-10/20 Prolog is a Prolog
in Prolog ...

2. To achieve that speed in determinate computation requires mapping 
Prolog procedure calls into Lisp function calls, which leaves 
backtracking in the lurch. The version of LM-Prolog I know of used 
stack group switches for backtracking, which is orders of magnitude 
slower than backtracking on the DEC-20 system.

3. Code compactness is sacrificed by compiling from Prolog into Lisp 
with open-coded unification. This is important because it worsens 
the paging behavior of large programs.

There are a lot of other issues in estimating the "real" efficiency of
Prolog systems, such as GC requirements and exact TRO discipline.  For
example, using CONS space for runtime Prolog data structures is a 
common technique that seems adequate when testing with naive reverse 
of a 30 long list, but appears hopeless for programs that build 
structure and backtrack a lot, because CONS space is not stack 
allocated ( unless you use certain nonportable tricks, and even 
then... ), and therefore is not reclaimed on backtracking ( one might 
argue that Lisp programs for the same task have the same problem, but 
efficient backtracking is precisely one of the major advantages of 
good Prolog implementations ).
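The storage point deserves a concrete picture.  A standard Prolog
implementation records each variable binding on a trail stack;
backtracking pops the trail, so the space is reclaimed immediately
rather than waiting for a garbage collector.  A toy sketch of the
mechanism (illustrative only, mine rather than any actual DEC-10
code):

```python
# Variable bindings live in a dictionary; the trail records which
# variables have been bound since a given choice point.
bindings = {}
trail = []

def bind(var, value):
    bindings[var] = value
    trail.append(var)

def choice_point():
    # Remember the trail height at the moment of choice.
    return len(trail)

def backtrack(mark):
    # Undo every binding made since the choice point; the storage is
    # reclaimed at once, with no garbage collection involved.
    while len(trail) > mark:
        del bindings[trail.pop()]

mark = choice_point()
bind("X", 1)
bind("Y", 2)
backtrack(mark)
assert bindings == {}   # both bindings undone on backtracking
```

Structures built in ordinary CONS space have no such mark to cut back
to, which is the problem described above.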

The current Lisp machines have exciting environment tools from which 
Prolog users would like to benefit.  I think that building Prolog 
systems in Lisp will hit artificial performance and language barriers 
long before the actual limits of the hardware employed are reached.  
The approach I favor is to take the latest developments in Prolog 
implementation and use them to build Prolog systems that coexist with 
Lisp on those machines, but use all the hardware resources.  I think 
this is possible with a bit of cooperation from manufacturers, and I 
have reasons to hope this will happen soon, and produce Prolog systems
with a performance far superior to DEC-20 Prolog.

Ken's approach may produce a tolerable system in the short term, but I
don't think it can ever reach the performance and functionality which
I think the new machines can deliver.  Furthermore, there are big
differences between the requirements of experimental systems, with all
sorts of new goodies, and day-to-day systems that do the standard
things, but just much better.  Ken's approach risks producing a system
that falls between these (conflicting) goals, leading to a much larger
implementation effort than is needed just for experimenting with
language extensions ( most of the time better done in Prolog ) or just
for a practical system.

-- Fernando Pereira

PS:  For what it is worth, the source of DEC-20 Prolog is 177 pages of 
Prolog and 139 of Macro-10 (at 1 instruction per line...).  The system
comprises a full compiler, interpreter, debugger and run time system, 
not using anything external besides operating system I/O calls.  We
estimate it incorporates between 5 and 6 man years of effort.

According to Ken, LM-Prolog is 230 pages of Lisp and Prolog ...

------------------------------

End of AIList Digest
********************

∂09-Sep-83  1628	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #55
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Sep 83  16:27:56 PDT
Date: Friday, September 9, 1983 12:29PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #55
To: AIList@SRI-AI


AIList Digest           Saturday, 10 Sep 1983      Volume 1 : Issue 55

Today's Topics:
  Intelligence - Turing Test & Definitions,
  AI Environments - Computing Power & Social Systems
----------------------------------------------------------------------

Date: Saturday,  3 Sep 1983 13:57-PDT
From: bankes@rand-unix
Subject: Turing Tests and Definitions of Intelligence


As much as I dislike adding one more opinion to an overworked topic, I
feel compelled to make a comment on the ongoing discussion of the
Turing test.  It seems to me quite clear that the Turing test serves
as a tool for philosophical argument and not as a defining criterion.
It serves the purpose of enlightening those who would assert the
impossibility of any machine ever being intelligent.  The point is, if
a machine which would pass the test could be produced, then a person
would have either to admit it to be intelligent or else accept that
his definition of intelligence is something which cannot be perceived
or tested.

However, when the Turing test is used as a tool with which to think
about "What is intelligence?" it leads primarily to insights into the
psychology and politics of what people will accept as intelligent.
(This is a consequence of the democratic definition: it's intelligent
if everybody agrees it is.)  Hence, we get all sorts of distractions:
Must an intelligent machine make mistakes, should an intelligent
machine have emotions, and most recently would an intelligent machine
be prejudiced?  All of this deals with a sociological viewpoint on
what is intelligent, and gets us no closer to a fundamental
understanding of the phenomenon.

Intelligence is an old word, like virtue and honor.  It may well be
that the progress of our understanding will make it obsolete, the word
may come to suggest the illusions of an earlier time.  Certainly, it
is much more complex than our language patterns allow.  The Turing
test suggests it to be a boolean: you've got it or you don't.  We
commonly use "smart" as a relational: you're smarter than me, but we're
both smarter than Rover.  This suggests intelligence is a scalar,
hence IQ tests.  But recent experience with IQ testing across cultures
together with the data from comparative psychology, would suggest that
intelligence is at least multi-dimensional.  Burrowing animals on the
whole do better at mazes than others.  Animals whose primary defense
is flight respond differently to aversive conditioning than do more
aggressive species.

We may have seen a recapitulation of this in the last twenty years'
experience with AI.  We have moved from looking for the philosopher's
stone, the single thing needed to make something intelligent, to
knowledge based systems.  No one would reasonably discuss (I think)
whether my program is smarter than yours.  But we might be able to say
that mine knows more about medicine than yours or that mine has more
capacity for discovering new relations of a specified type.

Thus I would suggest that the word intelligence (noun that it is,
suggesting a thing which might somehow be gotten ahold of) should be
used with caution.  And that the Turing test, as influential as it has
been, may have outlived its usefulness, at least for discussions among
the faithful.


                                -Steve Bankes
                                 RAND

------------------------------

Date: Sat, 3 Sep 83 17:07:33 EDT
From: "John B. Black" <Black@YALE.ARPA>
Subject: Learning Complexity

     There was recently a query on AIList about how to characterize
learning complexity (and saying that may be the crucial issue in
intelligence).  Actually, I have been thinking about this recently, so
I thought I would comment.  One way to characterize the learning
complexity of procedural skills is in terms of what kind of production
system is needed to perform the skill.  For example, the kind of
things a slug or crayfish (currently popular species in biopsychology)
can do seem characterizable by production systems with minimal
internal memory, conditions that are simple external states of the
world, and actions that are direct physical actions (this is
stimulus-response psychology in a nutshell).  However, human skills
(programming computers, doing geometry, etc.)  need much more complex
production systems with complex networks as internal memories,
conditions that include variables, and actions that are mental in
addition to direct physical actions.  Of course, what form productions
would have to be to exhibit human-level intelligence (if indeed, they
can) is an open question and a very active field of research.
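The lowest rung of this ladder can be sketched.  A stimulus-response
production system in the sense above is just an ordered list of
condition-action pairs matched against the external state, with no
internal memory and no variables; the hypothetical toy below is mine,
not from Black's message:

```python
def run(productions, state, steps=10):
    """Fire the first production whose condition matches the current
    (external) state, apply its action, and repeat -- no internal
    memory, no variables: stimulus-response psychology in a
    nutshell."""
    for _ in range(steps):
        for condition, action in productions:
            if condition(state):
                state = action(state)
                break
        else:
            break   # no production fires: halt
    return state

# Toy "skill": approach food.  The state is the distance to the food.
productions = [
    (lambda d: d > 0, lambda d: d - 1),   # food ahead: move toward it
    (lambda d: d == 0, lambda d: d),      # at the food: stay put
]
assert run(productions, 3) == 0
```

The human-level systems Black describes would need, in addition,
conditions with variables, a structured internal memory the rules can
read and write, and actions that change that memory rather than the
world.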

------------------------------

Date: 5 Sep 83 09:42:44 PDT (Mon)
From: woodson%UCBERNIE@Berkeley (Chas Woodson)
Subject: AI and computing power

Can you direct me to some wise comments on the following question?
Is the progress of AI being held up by lack of computing power?


[Reply follows. -- KIL]

There was a discussion of this on Human-Nets a year ago.
I am reprinting some of the discussion below.

My own feeling is that we are not being held back.  If we had
infinite compute power tomorrow, we would not know how to use it.
Others take the opposite view: that intelligence may be brute force
search, massive theorem proving, or large rule bases and that we are
shying away from the true solutions because we want a quick finesse.
There is also a view that some problems (e.g. vision) may require
parallel solutions, as opposed to parallel speedup of iterative
solutions.

The AI principal investigators seem to feel (see the Fall AI Magazine)
that it would be enough if each AI investigator had a Lisp Machine
or equivalent funding.  I would extend that a little further.  I think
that the biggest bottleneck right now is the lack of support staff --
systems wizards, apprentice programmers, program librarians, software
editors (i.e., people who edit other people's code), evaluators,
integrators, documentors, etc.  Could Lucas have made Star Wars
without a team of subordinate experts?  We need to free our AI
gurus from the day-to-day trivia of coding and system building just
as we use secretaries and office machines to free our management
personnel from administrative trivia.  We need to move AI from the
lone inventor stage to the industrial laboratory stage.  This is a
matter of social systems rather than hardware.

                                        -- Ken Laws

------------------------------

Date: Tuesday, 12 October 1982  13:50-EDT
From: AGRE at MIT-MC
Subject: artificial intelligence and computer architecture

   [Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96]

A couple of observations on the theory that AI is being held back by
the sorry state of computer architecture.

First, there are three projects that I know of in this country that
are explicitly trying to deal with the problem.  They are Danny
Hillis' Connection Machine project at MIT, Scott Fahlman's NETL
machine at CMU, and the NON-VON project at Columbia (I can't
remember who's doing that one right offhand).

Second, the associative memory fad came and went very many years
ago.  The problem, simply put, is that human memory is a more
complicated place than even the hairiest associative memory chip.
The projects I have just mentioned were all first meant as much more
sophisticated approaches to "memory architectures", though they have
become more than that since.

Third, it is quite important to distinguish between computer
architectures and computational concepts.  The former will always
lag ten years behind the latter.  In fact, although our computer
architectures are just now beginning to pull convincingly out of the
von Neumann trap, the virtual machines that our computer languages
run on haven't been in the von Neumann style for a long time.  Think
of object-oriented programming or semantic network models or
constraint languages or "streams" or "actors" or "simulation" ideas
as old as Simula and VDL.  True these are implemented on serial
machines, but they evoke conceptions of computation much closer to
our ideas about how the physical world works, with notions of causal
locality and data flow and asynchronous communication quite
analogous to those of physics; one uses these languages properly not
by thinking of serial computers but by thinking in these more
general terms.  These are the stuff of everyday programming, at
least among the avant garde in the AI labs.

None of this is to say that AI's salvation isn't in computer
architecture.  But it is to say that the process of freeing
ourselves from the technology of the 40's is well under way.
(Yes, I know, hubris.)   - phiL

------------------------------

Date: 13 Oct 1982 08:34 PDT
From: DMRussell at PARC-MAXC
Subject: AI and alternative architectures

   [Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96]

There is a whole subfield of AI growing up around parallel
processing models of computation.  It is characterized by the use of
massive compute engines (or models thereof) and a corresponding
disregard for efficiency concerns.  (Why not, when you've got n↑n
processors?)

"Parallel AI" is a result of a crossing of interests from neural
modelling,  parallel systems theory, and straightforward AI.
Currently, the most interesting work has been done in vision --
where the transformation from pixel data to more abstract
representations (e.g. edges, surfaces or 2.5-D data) via parallel
processing is pretty easy. There has been rather less success in
other, not-so-obviously parallel, fields.

Some work that is being done:

Jerry Feldman & Dana Ballard (University of Rochester)
        -- neural modelling, vision
Steve Small, Gary Cottrell, Lokendra Shastri (University of Rochester)
        -- parallel word sense and sentence parsing
Scott Fahlman (CMU) -- knowledge rep in a parallel world
??? (CMU) -- distributed sensor net people
Geoff Hinton (UC San Diego?) -- vision
Daniel Sabbah (IBM) -- vision
Rumelhart (UC San Diego) -- motor control
Carl Hewitt, Bill Kornfeld (MIT) -- problem solving

(not a complete list -- just a hint)

The major concerns of these people have been controlling the parallel
beasts they've created.  Basically, each of the systems accepts data
at one end, and then munges the data and various hypotheses about
the data until the entire system settles down to a single
interpretation.  It is all very messy, and incredibly difficult to
prove anything.  (e.g. Under what conditions will this system
converge?)

The obvious question is this: What does all of this alternative
architecture business buy you?  So far, I think it's an open
question.  Suggestions?

-- DMR --

------------------------------

Date: 13 Oct 1982 1120-PDT
From: LAWS at SRI-AI
Subject: [LAWS at SRI-AI: AI Architecture]


   [Reprinted from HUMAN-NETS Digest, 16 Oct 1982, Vol. 5, No. 96]

In response to Glasser @LLL-MFE:

I doubt that new classes of computer architecture will be the
solution to building artificial intelligence.  Certainly we could
use more powerful CPUs, and the new generation of LISP machines makes
practical approaches that were merely feasibility demonstrations
before.  The fact remains that if we don't have the algorithms for
doing something with current hardware, we still won't be able to do
it with faster or more powerful hardware.

Associative memories have been built in both hardware and software.
See, for example, the LEAP language that was incorporated into the
SAIL language.  (MAINSAIL, an impressive offspring of SAIL, has
abandoned this approach in favor of subroutines for hash table
maintenance.)  Hardware is also being built for data flow languages,
applicative languages, parallel processing, etc.  To some extent
these efforts change our way of thinking about problems, but for the
most part they only speed up what we knew how to do already.
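[The hash-table approach mentioned above fits in a few lines.  The
class below is a hypothetical sketch, in modern Python, of a
LEAP-style attribute-object-value store; it is not MAINSAIL's actual
subroutines.]

```python
class AssocStore:
    """A software associative memory in the spirit of LEAP's
    attribute-object-value triples, indexed with a hash table.
    (Hypothetical sketch; names and interface are invented.)"""

    def __init__(self):
        self.index = {}  # (attribute, object) -> set of values

    def put(self, attribute, obj, value):
        # record the triple attribute(obj) = value
        self.index.setdefault((attribute, obj), set()).add(value)

    def get(self, attribute, obj):
        # associative retrieval: all values v with attribute(obj) = v
        return self.index.get((attribute, obj), set())
```

[This illustrates the point in the text: the associative behavior is
easy to reproduce in software, so special hardware mostly buys speed
rather than new capability.]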

For further speculation about what we would do with "massively
parallel architectures" if we ever got them, I suggest the recent
papers by Dana Ballard and Geoffrey Hinton, e.g. in the Aug. ['82]
AAAI conference proceedings [...].  My own belief is that the "missing
link" to AI is a lot of deep thought and hard work, followed by VLSI
implementation of algorithms that have (probably) been tested using
conventional software running on conventional architectures.  To be
more specific we would have to choose a particular domain since
different areas of AI require different solutions.

Much recent work has focused on the representation of knowledge in
various domains: representation is a prerequisite to acquisition and
manipulation.  Dr. Lenat has done some very interesting work on a
program that modifies its own representations as it analyzes its own
behavior.  There are other examples of programs that learn from
experience.  If we can master knowledge representation and learning,
we can begin to get away from programming by full analysis of every
part of every algorithm needed for every task in a domain.  That
would speed up our progress more than new architectures.

[...]

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************

∂09-Sep-83  1728	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #56
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Sep 83  17:28:17 PDT
Date: Friday, September 9, 1983 3:36PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #56
To: AIList@SRI-AI


AIList Digest           Saturday, 10 Sep 1983      Volume 1 : Issue 56

Today's Topics:
  Professional Activities - JACM Referees & Inst. for Retraining in CS,
  Artificial Languages - Loglan,
  Knowledge Representation - Multiple Inheritance Query,
  Games - Puzzle & Go Tournament
----------------------------------------------------------------------

Date: 8 Sep 83 10:33:25 EDT
From: Sri <Sridharan@RUTGERS.ARPA>
Subject: referees for JACM (AI area)

Since the time I became the AI Area Editor for the JACM, I have found 
myself handicapped for lack of a current roster of referees.  This 
note is to ask you to volunteer to referee papers for the journal.

JACM is the major outlet for theoretical papers in computer science.  
In the area of AI most of the submissions in the past have ranged over
the topics of Automated Reasoning (Theorem Proving, Deduction, 
Induction, Default) and Automated Search (Search methods, state-space 
algorithms, And/Or reduction searches, analysis of efficiency and 
error and attendant tradeoffs).  Under my editorship I would like to 
broaden the scope to THEORETICAL papers in all areas of AI, including 
Knowledge Representation, Learning, Modeling (Space, Time, Causality),
Problem Formulation & Reformulation etc.

If you are willing to be on the roster of referees, please send me a 
note with your name, mailing address, net-address and telephone 
number.  Please also list your areas of interest and competence.

If you wish to submit a paper please follow the procedures described 
in the "instructions to authors" page of the journal.  Copies of mss 
can be sent to either me or to the Editor-in-Chief.

N.S. Sridharan [Sridharan@Rutgers] Area Editor, AI JACM

------------------------------

Date: Wed, 7 Sep 83 16:06 PDT
From: Jeff Ullman <ullman@Diablo>
Subject: Institute for Retraining in CS

                [Reprinted from the SU-SCORE BBoard.]

A summer institute for retraining college faculty to teach computer
science is being held at Clarkson College, Potsdam, NY, this summer,
under the auspices of a joint ACM/MAA committee.  They need lecturers
in all areas of computer science, to deliver 1-month courses.  People
at or close to the Ph.D. level are needed.  If interested, contact Ed
Dubinsky at 315-268-2382 (office) or 315-265-2906 (home).

------------------------------

Date: 6 Sep 83 18:15:17-PDT (Tue)
From: harpo!gummo!whuxlb!pyuxll!abnjh!icu0 @ Ucb-Vax
Subject: Re: Loglan
Article-I.D.: abnjh.236

[Directed to Pourne@MIT-MC]


1. Rumor has it that SOMEONE at the Univ. of Washington (State of, NOT
D.C.)  was working on the [LOGLAN] grammar online (UN*X, as I recall).
I haven't yet had the temerity to post a general inquiry regarding
their locale. If they read your request and respond, please POST
it...some of us out here are also interested.

2. A friend of mine at Ohio State has typed in (by hand!) the glossary
from Vol 1 (the layman's grammar) which could be useful for writing a
"flashcard" program, but both of us are too busy.

                         Art Wieners
                         (who will only be at this addr for this week,
                          but keep your modems open for a resurfacing
                          at da Labs...)

------------------------------

Date: 7 Sep 83 16:43:58-PDT (Wed)
From: decvax!genrad!grkermit!chris @ Ucb-Vax
Subject: Re: Loglan 
Article-I.D.: grkermit.654

I just posted something relevant to net.nlang.  (I'm not sure which is
more appropriate, but I'm going to assume that "natural" language is
closer than all of Artificial Intelligence.)

I sent a request for information to the Loglan Institute (Route 10,
Box 260, Gainesville, FL 32601 [a NEW address]), and they are just about
to go splashily public again.  I posted the first page of their reply
letter; see net.nlang for more details.  Later postings will cover
their short description of their Interactive Parser, which is among
their many new or improved offerings.

decvax!genrad!grkermit!chris
allegra!linus!genrad!grkermit!chris 
harpo!eagle!mit-vax!grkermit!chris

------------------------------

Date: 2-Sep-83 19:33 PDT
From: Kirk Kelley  <KIRK.TYM@OFFICE-2>
Subject: Multiple Inheritance query

Can you tell me where I can find a discussion of the anatomy and value
of multiple inheritance?  I wonder if it is worth adding this feature
to the design for a lay-person's language, called Players, for
specifying adventures.

 -- kirk

------------------------------

Date: 24 August 1983 1536-PDT (Wednesday)
From: Foonberg at AEROSPACE (Alan Foonberg)
Subject: Another Puzzle

                 [Reprinted from the Prolog Digest.]

I was glancing at an old copy of Games magazine and came across the 
following puzzle:

Can you find a ten digit number such that its left-most digit tells 
how many zeroes there are in the number, its second digit tells how 
many ones there are, etc.?

For example, 6210001000.  There are 6 zeroes, 2 ones, 1 two, no 
threes, etc. I'd be interested to see any efficient solutions to this
fairly simple problem. Can you derive all such numbers, not only
ten-digit numbers?  Feel free to make your own extensions to this
problem.
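[One brute-force sketch, added here as a hypothetical illustration in
Python (any language would do): since each digit counts occurrences
and there are n positions, the digits of an n-digit base-n solution
must sum to n, which prunes the search enough to enumerate directly.]

```python
def self_describing(n):
    """Enumerate all n-digit self-describing numbers in base n.
    Assumes n <= 10 so each digit prints as one character.
    Key pruning fact: the digits must sum to n, since each digit
    counts occurrences and there are n positions in all."""
    results = []

    def extend(digits, remaining):
        if len(digits) == n:
            if remaining == 0 and all(
                    digits[i] == digits.count(i) for i in range(n)):
                results.append("".join(str(d) for d in digits))
            return
        # never spend more than the remaining digit-sum budget,
        # and never exceed the largest base-n digit, n-1
        for d in range(min(remaining, n - 1) + 1):
            extend(digits + [d], remaining - d)

    extend([], n)
    return results
```

[For base 10 this finds 6210001000 and nothing else; smaller bases
such as 4 and 5 have their own solutions.]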

Alan

------------------------------

Date: 5 Sep 83 20:11:04-PDT (Mon)
From: harpo!psl @ Ucb-Vax
Subject: Go Tournament
Article-I.D.: harpo.1840


                          ANNOUNCING
                        The First Ever
                            USENIX
                           COMPUTER

                         #####  #######
                        #     # #     #
                        #       #     #
                        #  #### #     #
                        #     # #     #
                        #     # #     #
                         #####  #######

##### ####  #    # #####  #    #   ##   #    # ###### #    # #####
  #  #    # #    # #    # ##   #  #  #  ##  ## #      ##   #   #
  #  #    # #    # #    # # #  # #    # # ## # #####  # #  #   #
  #  #    # #    # #####  #  # # ###### #    # #      #  # #   #
  #  #    # #    # #   #  #   ## #    # #    # #      #   ##   #
  #   ####   ####  #    # #    # #    # #    # ###### #    #   #


              A B C D E F G H j K L M N O P Q R S T

          19  + + + + + + + + + + + + + + + + + + +  19
          18  + + + + + + + + + + + + + + + + + + +  18
          17  + + + O @ + + + + + + + + + + + + + +  17
          16  + + + O + + + O + @ + + + + + @ + + +  16
          15  + + + + + + + + + + + + + + + + + + +  15
          14  + + O O + + + O + @ + + + + + + + + +  14
          13  + + @ + + + + + + + + + + + + + + + +  13
          12  + + + + + + + + + + + + + + + + + + +  12
          11  + + + + + + + + + + + + + + + + + + +  11
          10  + + + + + + + + + + + + + + + + + + +  10
           9  + + + + + + + + + + + + + + + + + + +  9
           8  + + + + + + + + + + + + + O O O O @ +  8
           7  + + O @ + + + + + + + + + O @ @ @ @ @  7
           6  + + @ O O + + + + + + + + + O O O @ +  6
           5  + + O + + + + + + + + + + + + O @ @ +  5
           4  + + + O + + + + + + + + + + + O @ + +  4
           3  + + @ @ + @ + + + + + + + + @ @ O @ +  3
           2  + + + + + + + + + + + + + + + + + + +  2
           1  + + + + + + + + + + + + + + + + + + +  1

              A B C D E F G H j K L M N O P Q R S T


To be held during the Summer 1984 Usenix conference in Salt Lake
City, Utah.


Probable Rules
-------- -----

1)  The board will be 19 x 19.
This size was chosen rather than one of the smaller boards because
there is a great deal of accumulated Go "wisdom" that would be
worthless on smaller boards.

2) The board positions will be numbered as in the diagram above.  The
columns will be labeled 'A' through 'T' (excluding 'I') left to
right.  The rows will be labeled '19' through '1', top to bottom.

3) Play will continue until both programs pass in sequence.  This may
be a trouble spot, but looks like the best approach available.
Several alternatives were considered: (1) have the referee decide
when the game is over by identifying "uncontested" versus "contested"
area; (2) limit the game to a certain number of moves.  Each of these
had one or another unreasonable effect.

4) There will be a time limit for each program.  This will be in the
form of a limit on accumulated "user" time (60 minutes?).  If a
program goes over the time limit it will be allowed some minimum
amount of time for each move (15 seconds?).  If no move is generated
within the minimum time the game is forfeit.

5) The tournament will use a "referee" program to execute each
competing pair of programs; thus the programs must understand a
standard set of commands and generate output of a standard form.

    a) Input to the program.  All input commands to the program will
       be in the form of lines of text appearing on the standard
       input and terminated by a newline.
        1) The placement of a stone will be expressed as
           letter-number (e.g. "G7").  Note that the letter "I"
           is not included.
        2) A pass will be expressed as "pass".
        3) The command "time" means the time limit has been exceeded
           and all further moves must be generated within the shorter
           minimum time limit.
    b) Output from the program.  All output from the program will be
       in the form of lines of characters sent to the "standard
       output" (terminated by a newline) and had better be unbuffered.
        1) The placement of a stone will be expressed as
           letter-number, as in "G12".  Note that the letter "I"
           is not included.
        2) A pass will be expressed as "pass".
        3) Any other output lines will be considered garbage and
           ignored.
        4) Any syntactically correct but semantically illegal move
           (e.g. spot already occupied, ko violation, etc.) will be
           considered a forfeit.

The referee program will maintain a display of the board, the move
history, etc.
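[A minimal player speaking the protocol of rule 5 might look like the
following hypothetical sketch; the actual referee interface could
differ in detail.  It never does anything but pass, but it shows the
line-oriented, unbuffered exchange.]

```python
import sys

def respond(command):
    """Decide the reply to one referee command.  A real program would
    update its board and search for a move; this one always passes."""
    if command == "time":
        # time-limit notification: later moves must come within the
        # shorter limit, but no reply line is expected for "time" itself
        return None
    # otherwise the command is the opponent's move (e.g. "G7") or "pass"
    return "pass"

def play(infile=sys.stdin, outfile=sys.stdout):
    # the referee would run this with pipes on both ends
    for line in infile:
        reply = respond(line.strip())
        if reply is not None:
            # rule 5b: output must reach the referee unbuffered
            print(reply, file=outfile, flush=True)
```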

6) The general form of the tournament will depend on the number of
participants, the availability of computing power, etc.  If only a
few programs are entered each program will play every other program
twice.  If many are entered some form of Swiss system will be used.

7) These rules are not set in concrete ... yet; this one in
particular.


Comments, suggestions, contributions, etc. should be sent via uucp
to harpo!psl or via U.S. Mail to Peter Langston / Lucasfilm Ltd. /
P.O. Box 2009 / San Rafael, CA  94912.


For the record: I am neither "at Bell Labs" nor "at Usenix", but
rather "at" a company whose net address is a secret (cough, cough!).
Thus notices like this must be sent through helpful intermediaries
like Harpo.  I am, however, organizing this tournament "for" Usenix.

------------------------------

End of AIList Digest
********************

∂15-Sep-83  2007	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #57
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Sep 83  20:05:40 PDT
Date: Thursday, September 15, 1983 4:57PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #57
To: AIList@SRI-AI


AIList Digest            Friday, 16 Sep 1983       Volume 1 : Issue 57

Today's Topics:
  Artificial Intelligence - Public Recognition,
  Programming Languages - Multiple Inheritance & Micro LISPs,
  Query Systems - Talk by Michael Hess,
  AI Architectures & Prolog - Talk by Peter Borgwardt,
  AI Architectures - Human-Nets Reprints
----------------------------------------------------------------------

Date: 10 Sep 1983 21:44:16-PDT
From: Richard Tong <fuzzy1@aids-unix>
Subject: "some guy named Alvey"

John Alvey is Senior Director, Technology, at British Telecom.  The 
committee that he headed reported to the British Minister for 
Information Technology in September 1982 ("A Program for Advanced 
Information Technology", HMSO 1982).

The committee was formed in response to the announcement of the
Japanese 5th Generation Project at the behest of the British
Information Technology Industry.

The major recommendations were for increased collaboration within 
industry, and between industry and academia, in the areas of Software 
Engineering, VLSI, Man-Machine Interfaces and Intelligent 
Knowledge-Based Systems.  The recommended funding levels were 
approximately $100M, $145M, $66M, and $40M respectively.

The British Government's response was entirely positive and resulted
in the setting up of a small Directorate within the Department of
Industry.  This is staffed by people from industry and supported by
the Government.

The most obvious results so far have been the creation of several 
Information Technology posts in various universities.  Whether the 
research money will appear as quickly remains to be seen.

Richard.

------------------------------

Date: Mon 12 Sep 83 22:35:21-PDT
From: Edward Feigenbaum <FEIGENBAUM@SUMEX-AIM>
Subject: The world turns; would you believe...

                [Reprinted from the SU-SCORE bboard.]

1. A thing called the Wall Street Computer Review, advertising
a conference on computers for Wall Street professionals, with
keynote speech by Isaac Asimov entitled "Artificial Intelligence
on Wall Street"

2. In the employment advertising section of last Sunday's NY Times,
Bell Labs (of all places!)  showing Expert Systems prominently
as one of their areas of work and need, and advertising for people
to do Expert Systems development using methods of Artificial
Intelligence research. Now I'm looking for a big IBM ad in
Scientific American...

3. In 2 September SCIENCE, an ad from New Mexico State's Computing
Research Laboratory. It says:

"To enhance further the technological capabilities of New Mexico, the
state has funded five centers of technical excellence including
Computing Research Laboratory (CRL) at New Mexico State University.
...The CRL is dedicated to interdisciplinary research on knowledge-
based systems"

------------------------------

Date: 15 Sep 1983 15:28-EST
From: David.Anderson@CMU-CS-G.ARPA
Subject: Re: Multiple Inheritance query

For a discussion of multiple inheritance see "Multiple Inheritance in 
Smalltalk-80" by Alan Borning and Dan Ingalls in the AAAI-82
proceedings.  The Lisp Machine Lisp manual also has some justification
for multiple inheritance schemes in the chapter on Flavors.

--david

[See also any discussion of the LOOPS language, e.g., in the
Fall issue of AI Magazine.  -- KIL]
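[For readers unfamiliar with the feature itself: multiple inheritance
lets one class draw behavior from several parents, with some rule for
resolving conflicts.  A hypothetical sketch in modern Python follows;
Smalltalk-80 and Flavors each have their own precedence rules, which
differ from this one.]

```python
class Flying:
    def move(self):
        return "fly"

class Swimming:
    def move(self):
        return "swim"

class Duck(Flying, Swimming):
    """Inherits from both parents.  Both define move(), so the
    declared parent order (Flying first) resolves the conflict."""
    pass
```

[The "anatomy" question in Kirk's query is precisely what the conflict
rule should be, and whether the added power is worth the complexity in
a lay-person's language.]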

------------------------------

Date: Wed 14 Sep 83 19:16:41-EDT
From: Ted Markowitz <TJM@COLUMBIA-20.ARPA>
Subject: Info on micro LISP dialects

Has anyone evaluated versions of LISP that run on micros? I'd like to
find out what's already out there and people's impressions of them.
The hardware would be something in the nature of an IBM PC or a DEC
Rainbow.

--ted

------------------------------

Date: 12 Sep 1983 1415-PDT
From: Ichiki
Subject: Talk by Michael Hess

      [This talk will be given at the SRI AI Center.  Visitors
      should come to E building on Ravenswood Avenue in Menlo
      Park and call Joani Ichiki, x4403.]


                Text Based Question Answering Systems
                -------------------------------------

                             Michael Hess
                     University of Texas, Austin

                  Friday, 16 September, 10:30, EK242

Question Answering Systems typically operate on Data Bases consisting 
of object level facts and rules. This, however, limits their 
usefulness quite substantially. Most scientific information is 
represented as Natural Language texts. These texts provide relatively 
few basic facts but do give detailed explanations of how they can be 
interpreted, i.e. how the facts can be linked with the general laws 
which either explain them, or which can be inferred from them. This 
type of information, however, does not lend itself to an immediate 
representation on the object level.

Since there are no known proof procedures for higher order logics we 
have to find makeshift solutions for a suitable text representation 
with appropriate interpretation procedures. One way is to use the 
subset of First Order Predicate Calculus as defined by Prolog as a 
representation language, and a General Purpose Planner (implemented in
Prolog) as an interpreter. Answering a question over a textual data 
base can then be reduced to proving the answer in a model of the world
as described in the text, i.e. to planning a sequence of actions 
leading from the state of affairs given in the text to the state of 
affairs given in the question. The meta-level information contained in
the text is used as control information during the proof, i.e. during 
the execution of the simulation in the model. Moreover, the format of 
the data as defined by the planner makes explicit some kinds of 
information particularly often addressed in questions.

The simulation of an experiment in the Blocks World, using the kind of
meta-level information important in real scientific experiments, can 
be used to generate data which, when generalised, could be used 
directly as DB for question answering about the experiment.  
Simultaneously, it serves as a pattern for the representation of 
possible texts describing the experiment.  The question of how to 
translate NL questions and NL texts, into this kind of format, 
however, has yet to be solved.

------------------------------

Date: 12 Sep 1983 1730-PDT
From: Ichiki
Subject: Talk by Peter Borgwardt

      [This talk will be given at the SRI AI Center.  Visitors
      should come to E building on Ravenswood Avenue in Menlo
      Park and call Joani Ichiki, x4403.]

There will be a talk given by Peter Borgwardt on Monday, 9/19 at 
10:30am in Conference Room EJ222.  Abstract follows:

              Parallel Prolog Using Stack Segments
                on Shared-memory Multiprocessors

                         Peter Borgwardt
                   Computer Science Department
                     University of Minnesota
                     Minneapolis, MN 55455

                            Abstract

A method of parallel evaluation for Prolog is presented for 
shared-memory multiprocessors that is a natural extension of the 
current methods of compiling Prolog for sequential execution.  In 
particular, the method exploits stack-based evaluation with stack 
segments spread across several processors to greatly reduce the need
for garbage collection in the distributed computation.  AND 
parallelism and stream parallelism are the most important sources of
concurrent execution in this method; these are implemented using local
process lists; idle processors may scan these and execute any process
as soon as its consumed (input) variables have been defined by the
goals that produce them.  OR parallelism is considered less important
but the method does implement it with process numbers and variable
binding lists when it is requested in the source program.

------------------------------

Date: Wed, 14 Sep 83 07:31 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: human-nets discussion on AI and architecture

Ken,

   I see you have revived the Human-nets discussion about AI and
computer architecture.  I initiated that discussion and saved all
the replies.  I thought you might be interested.  I'm sending them
to you rather than AILIST so you can use your judgment about what
if anything you might like to forward to AILIST.
                                        Alan

[The following is the original message.  The remainder of this
digest consists of the collected replies.  I am not sure which,
if any, appeared in Human-Nets.  -- KIL]


---------------------------------------------------------------------

Date: 4 Oct 1982 (Monday) 0537-EDT
From: GLASSER at LLL-MFE
Subject: artificial intelligence and computer architecture

     I am a new member of the HUMAN-NETS interest group.  I am also
newly interested in Artificial Intelligence, partly as a result of
reading "Goedel, Escher, Bach" and similar recent books and articles
on AI.  While this interest group isn't really about AI, there isn't
any other group which is, and since this one covers any computer
topics not covered by others, this will do as a forum.
     From what I've read, it seems that most or all AI work now
being done involves using von Neumann computer programs to model
aspects of intelligent behavior.  Meanwhile, others like Backus
(IEEE Spectrum, August 1982, p.22) are challenging the dominance of
von Neumann computers and exploring alternative programming styles
and computer architectures. I believe there's a crucial missing link
in understanding intelligent behavior.  I think it's likely to
involve the nature of associative memory, and I think the key to it
is likely to involve novel concepts in computer architecture.
Discovery of the structure of associative memory could have an
effect on AI similar to that of the discovery of the structure of
DNA on genetics.  Does anyone out there have similar ideas?  Does
anyone know of any research and/or publications on this sort of
thing?

---------------------------------------------------------------------

Date: 15 Oct 1982 1406-PDT
From: Paul Martin <PMARTIN at SRI-AI>
Subject: Re: HUMAN-NETS Digest   V5 #96

Concerning the NON-VON project at Columbia, David Shaw, formerly of
the Stanford A. I. Lab, is using the development of some
non-von Neumann hardware designs to make an interesting class of
database access operations no longer require time exponential in
the size of the db.  He wouldn't call his project
AI, but rather an approach to "breaking the von Neumann bottleneck"
as it applies to a number of well-understood but poorly solved
problems in computing.

---------------------------------------------------------------------

Date: 28 Oct 1982 1515-EDT
From: David F. Bacon
Subject: Parallelism and AI
Reply-to: Columbia at CMU-20C

Parallel Architectures for Artificial Intelligence at Columbia

While the NON-VON supercomputer is expected to provide significant
performance improvements in other areas as well, one of the
principal goals of the project is the provision of highly efficient
support for large-scale artificial intelligence applications.  As
Dr. Martin indicated in his recent message, NON-VON is particularly
well suited to the execution of relational algebraic operations.  We
believe, however, that such functions, or operations very much like
them, are central to a wide range of artificial intelligence
applications.

In particular, we are currently developing a parallel version of the
PROLOG language for NON-VON (in addition to parallel versions of
Pascal, LISP and APL).  David Shaw, who is directing the NON-VON
project, wrote his Ph.D.  thesis at the Stanford A.I. Lab on a
subject related to large-scale parallel AI operations.  Many of the
ideas from his dissertation are being exploited in our current work.

The NON-VON machine will be constructed using custom VLSI chips,
connected according to a binary tree-structured topology.  NON-VON
will have a very "fine granularity" (that is, a large number of very
small processors).  A full-scale NON-VON machine might embody on the
order of 1 million processing elements.  A prototype version
incorporating 1000 PE's should be running by next August.

In addition to NON-VON, another machine called DADO is being
developed specifically for AI applications (for example, an optimal
running time algorithm for Production System programs has already
been implemented on a DADO simulator).  Professor Sal Stolfo is
principal architect of the DADO machine, and is working in close
collaboration with Professor Shaw.  The DADO machine will contain a
smaller number of more powerful processing elements than NON-VON,
and will thus have a "coarser" granularity.  DADO is being
constructed with off-the-shelf Intel 8751 chips; each processor will
have 4K of EPROM and 8K of RAM.

Like NON-VON, the DADO machine will be configured as a binary tree.
Since it is being constructed using "off-the-shelf" components, a
working DADO prototype should be operational at an earlier date than
the first NON-VON machine (a sixteen node prototype should be
operational in three weeks!).  While DADO will be of interest in its
own right, it will also be used to simulate the NON-VON machine,
providing a powerful testbed for the investigation of massive
parallelism.

As some people have legitimately pointed out, parallelism doesn't
magically solve all your problems ("we've got 2 million processors,
so who cares about efficiency?").  On the other hand, a lot of AI
problems simply haven't been practical on conventional machines, and
parallel machines should help in this area.  Existing problems are
also sped up substantially [ O(N) sort, O(1) search, O(n↑2) matrix
multiply ].  As someone already mentioned, vision algorithms seem
particularly well suited to parallelism -- this is being
investigated here at Columbia.

New architectures won't solve all of our problems -- it's painfully
obvious on our current machines that even fast expensive hardware
isn't worth a damn if you haven't got good software to run on it,
but even the best of software is limited by the hardware.  Parallel
machines will overcome one of the major limitations of computers.

David Bacon
NON-VON/DADO Research Group
Columbia University

------------------------------

Date: 7 Nov 82 13:43:44 EST  (Sun)
From: Mark Weiser <mark.umcp-cs@UDel-Relay>
Subject: Re:  Parallelism and AI

Just to mention another project, The CS department at the University
of Maryland has a parallel computing project called Zmob.  A Zmob
consists of 256 Z-80 processors called moblets, each with 64k
memory, connected by a 48 bit wide high speed shift register ring
network  (100ns/shift, 25.6us/revolution) called the "conveyer
belt".  The conveyer belt acts almost like a 256x256 cross-bar since
it rotates faster than a z-80 can do significant I/O, and it also
provides for broadcast messages and messages sent and received by
pattern match.  Each Z-80 has serial and parallel ports, and the
whole thing is served by a Vax which provides cross-compiling and
file access.

There are four projects funded and working on Zmob (other than the
basic hardware construction), sponsored by the Air Force.  One is
parallel numerical analysis, matrix calculations, and the like (the
Z-80's have hardware floating point).  The second is parallel image
processing and vision.  The third is distributed problem solving
using Prolog.  The fourth (mine) is operating systems and software,
developing remote-procedure-call and a distributed version of Unix
called Mobix.

A two-moblet prototype was working a year and a half ago, and we hope
to bring up a 128 processor version in the next few months.  (The
boards are all PC'ed and stuffed but timing problems on the bus are
temporarily holding things back).

------------------------------

End of AIList Digest
********************

∂16-Sep-83  1714	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #58
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Sep 83  17:12:40 PDT
Date: Friday, September 16, 1983 4:10PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #58
To: AIList@SRI-AI


AIList Digest           Saturday, 17 Sep 1983      Volume 1 : Issue 58

Today's Topics:
  Automatic Translation - Ada,
  Games - Go Programs & Foonberg's Number Problem,
  Artificial Intelligence - Turing Test & Creativity
----------------------------------------------------------------------

Date: 10 Sep 83 13:50:18-PDT (Sat)
From: decvax!wivax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!sdl@Ucb-Vax
Subject: Re: Translation into Ada:  Request for Info
Article-I.D.: rayssd.142

There have been a number of translators from Pascal to Ada, the first
successful one I know of was developed at UC Berkeley by P. Albrecht,
S. Graham et al.  See the "Source-to-Source Translation" paper in the
1980 Proceedings of Sigplan Symp. on Ada, Dec. 1980.

At Univ. S. Calif. Info. Sci. Institute (USC-ISI), Steve Crocker (now
at the Aerospace Corp.) developed AUTOPSY, a translator from CMS-2 to
Ada.  (CMS-2 is the Navy standard language for embedded software.)

Steve Litvintchouk
Raytheon Company
Portsmouth, RI  02871

------------------------------

Date: 10 Sep 83 13:56:17-PDT (Sat)
From: decvax!wivax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!sdl@Ucb-Vax
Subject: Re: Go Tournament
Article-I.D.: rayssd.143

ARE there any available Go programs which run on VAX/UNIX which I
could obtain?  (Either commercially sold, or available from
universities, or whatever.)

I find Go fascinating and would love to have a Go program to play
against.

Please reply via USENET, or to:

Steve Litvintchouk
Raytheon Company
Submarine Signal Division
Portsmouth, RI 02871

(401)847-8000 x4018

------------------------------

Date: 14 Sep 1983 16:18-EDT
From: Dan Hoey <hoey@NRL-AIC>
Subject: Alan Foonberg's number problem

I'm surprised you posted Alan Foonberg's number problem on AIList,
since Vivek Sarkar's solution has already appeared (Prolog Digest V1
#28).  I enclose his solution below.  His solution unfortunately omits
the special cases 2020 and 21200; I have sent a correction to the
Prolog Digest.

Dan

------------------------------

Date: Wed 7 Sep 83 11:08:08-PDT
From: Vivek Sarkar <JLH.Vivek@SU-SIERRA>
Subject: Solution to Alan Foonberg's Number Puzzle

Here is a general solution to the puzzle posed by Alan Foonberg:

My generalisation is to consider n-digit numbers in base n.  The
digits can therefore take on values in the range 0..n-1.

A summary of the solution is:

n = 4:  1210

n >= 7:  (n-4) 2 1 0 0 ... 0 0 1 0 0 0
                   <--------->
                    (n-7) 0's

Further, these describe ALL possible solutions; i.e., radix values of
2, 3, 5, and 6 have no solutions, and every other radix has exactly
one solution.

Proof:

CASE 2 <= n <= 6:  Consider these as singular cases.  It is simple to
show that there are no solutions for 2, 3, 5, and 6 and that 1210 is
the only solution for 4.  You can do this by writing a program to
generate all solutions for a given radix.  (I did that; unfortunately
it works out better in Pascal than in Prolog!)

CASE n >= 7:  It is easy to see that the given number is indeed a
solution.  (The rightmost 1 represents the single occurrence of (n-4)
at the beginning.)  For motivation, we can substitute n=10 and get
6210001000, which was the decimal solution provided by Alan.

The tough part is to show that this represents the only solution for
a given radix.  We do this by considering all possible values for the
first digit (call it d0) and showing that d0=(n-4) is the only one
which can lead to a solution.

SUBCASE d0 < (n-4):  Let d0 = n-4-j, where j >= 1.  Therefore the
number has (n-4-j) 0's, which leaves (j+3) non-zero digits apart from
d0.  Further, these (j+3) digits must add up to (j+4).  (The sum of
the digits of a solution must be n, since there are n digits in the
number, and the value of each digit contributes to a frequency count
of digits with its positional value.)  The only way that (j+3)
non-zero digits can add up to (j+4) is by having (j+2) 1's and one 2.
If there are (j+2) 1's, then the second digit from the left, which
counts the number of 1's (call it d1), must equal (j+2).  Since
j >= 1, d1=(j+2) is neither a 1 nor a 2.  Contradiction!

SUBCASE d0 > (n-4):  This leads to 3 possible values for d0: (n-1),
(n-2) & (n-3).  It is simple to consider each value and see that it
can't possibly lead to a solution, by using an analysis similar to the
one above.

We therefore conclude that d0=(n-4), and it is straightforward to show
that the given solution is the only possible one, for this value of
d0.

-- Q.E.D.

------------------------------

Date: Wed 14 Sep 83 17:25:38-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Re: Alan Foonberg's number problem

Thanks for the note and the correction.  I get the Prolog digest
a little delayed, so I hadn't seen the answer at the time I relayed
the problem.

My purpose in sending out the problem actually had nothing to do with
finding the answer.  The answer you forwarded is a nice mathematical
proof, but the question is whether and how AI techniques could solve
the problem.  Would an AI program have to reason in the same manner as
a mathematician?  Would different AI techniques lead to different
answers?  How does one represent the problem and the solution in 
machine-readable form?  Is this an interesting class of problems for
cognitive science to deal with?

I was expecting that someone would respond with a 10-line PROLOG
program that would solve the problem.  The discussion that followed
might contrast that with the LISP or ALGOL infrastructure needed to
solve the problem.  Now, of course, I don't expect anyone to present
algorithmic solutions.

                                        -- Ken Laws

------------------------------

Date: 9 Sep 83 13:15:56-PDT (Fri)
From: harpo!floyd!cmcl2!csd1!condict @ Ucb-Vax
Subject: Re: in defense of Turing - (nf)
Article-I.D.: csd1.116

A comment on the statement that it is easy to trip up an allegedly
intelligent machine that generates responses by using the input as an
index into an array of possible outputs:  Yes, but this machine has no
state and hence hardly qualifies as a machine at all!  The simple
tricks you described cannot be used if we augment it to use the entire
sequence of inputs so far as the index, instead of just the most
recent one, when generating its response. This allows it to take into
account sequences that contain runs of identical inputs and to 
understand inputs that refer to previous inputs (or even
Hofstadteresque self-referential inputs).  My point is not that this
new machine cannot be tripped up but that the one described is such a
straw man that fooling it gives no information about the real
difficulty of programming a computer to pass the Turing test.
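
[A toy illustration of the augmented machine described above (entirely
hypothetical; the class and table here are my own invention): index the
response table by the whole conversation so far, not just the last
input, so repeated identical inputs can draw different replies. -- KIL]

```python
class HistoryResponder:
    """Lookup-table responder keyed on the entire input sequence so far,
    rather than on the most recent input alone."""
    def __init__(self, table):
        self.table = table      # maps tuples of inputs -> reply
        self.history = []
    def reply(self, text):
        self.history.append(text)
        return self.table.get(tuple(self.history), "I don't follow.")
```

With a table mapping ("hi",) to "hello" and ("hi", "hi") to "you said
that already", saying "hi" twice gets two different answers -- the
simple repetition trick no longer exposes the machine.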

------------------------------

Date: 10 Sep 83 22:20:39-PDT (Sat)
From: decvax!wivax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!speaker@Ucb-Vax
Subject: Re: in defense of Turing
Article-I.D.: umcp-cs.2538

It should be fairly obvious that the Turing test is not a precise
test to determine intelligence because the very meaning of the
word 'intelligence' cannot be precisely pinned down, despite what
your Oxford dictionary might say.

I think the idea here is that if a machine's behavior is
indistinguishable from that of a human, then it can be said to
display human intelligence.  Note that I said, "human intelligence."

It is even debatable whether certain members of the executive branch
can be said to be intelligent.  If we can't apply the Turing test
there... then surely we're just spinning our wheels in an attempt
to apply it universally.

                                                - Speaker

--
Full-Name:      Speaker-To-Animals
Csnet:          speaker@umcp-cs
Arpa:           speaker.umcp-cs@UDel-Relay

This must be hell...all I can see are flames... towering flames!

------------------------------

Date: Wed 14 Sep 83 12:35:11-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: intelligence and genius

[This continues a discussion on Human-Nets.  My original statement, 
printed below, was shot down by several people.  Individuals certainly
derive satisfaction from hobbies at which they will never excel.  It 
would take much of the fun out of my life, however, if I could not
even imagine excelling at anything because cybernetic life had
surpassed humans in every way. -- KIL]

    From: Ken Laws <Laws@SRI-AI.ARPA>
    Life will get even worse if AI succeeds in automating true
    creativity.  What point would there be in learning to paint,
    write, etc., if your home computer could knock out more
    artistic creations than you could ever hope to master?

    I was rather surprised that this suggestion was accepted so
readily as it stands. Most people in AI believe that we will someday create an
"intelligent" machine, but Ken's claim seems to go beyond that;
"automating true creativity" seems to be saying that we can create not
just intelligent, but "genius" systems, at will. The automation of
genius is a more sticky claim in my mind.

    For example, if we create an intelligent system, do we make it a
genius system by just turning up the speed or increasing its memory?
That's like saying a painter could become Rembrandt if he/she just
painted 1000 times more. More likely is that the wrong (or uncreative)
ideas would simply pour out faster, or be remembered longer. Turning
up the speed of the early blind-search chess programs made them
marginally better players, but no more creative.

    Or let's say we stumble onto the creation of some genius system,
call it "Einstein". Do we get all of the new genius systems we need by
merely duplicating "Einstein", something impossible to do with human
systems? Again, we hit a dead end... "Einstein" will only be useful in
a small domain of creativity, and will never be a Bach or a Rembrandt
no matter how many we clone.  Even more discouraging, if we xerox off
1000 of our "Einstein" systems, do we get 1000 times the creative
ideas? Probably not; we will cover the range of "Einstein's" potential
creativity better, but that's it. Even a genius has only a range of
creativity.

    What is it about genius systems that makes them so intractable?  
If we will someday create intelligent systems consistently and
reliably, what stands in the way of creating genius systems on demand?
I would suggest that statistics get in our way here; that genius
systems cannot be created out of dust, but that every once in a while,
an intelligent system has the proper conditioning and evolves into a
genius system. In this light, the number of genius systems possible
depends on the pool of intelligent systems that are available as
substrate.

    In short, while I feel we will be able to create intelligent 
systems, we will not be able to directly construct superintelligent 
ones. While there will be advantages in duplicating, speeding up, or
otherwise manipulating a genius system once created, the process of
creating one will remain maddeningly elusive.

David Rogers DRogers@SUMEX-AIM.ARPA


[I would like to stake out a middle ground: creative systems.

We will certainly have intelligent systems, and we will certainly have
trouble devising genius systems.  (Genius in human terms: I don't want
to get into whether an AI program can be >>sui generis<< if we can
produce a thousand variations of it before breakfast.)  A [scientific]
genius is someone who develops an idea for which there is, or at least
seems to be, no precedent.

Creativity, however, can exist in a lesser being.  Forget Picasso,
just consider an ordinary artist who sees a new style of bold,
imaginative painting.  The artist has certain inborn or learned
measures of artistic merit: color harmony, representational accuracy,
vividness, brush technique, etc.  He evaluates the new painting and
finds that it exists in a part of his artistic "parameter space" that
he has never explored.  He is excited, and carefully studies the
painting for clues as to the techniques that were used.  He
hypothesizes rules for creating similar visual effects, tries them out,
modifies them, iterates, adds additional constraints (yes, but can I
do it with just rectangles ...), etc.  This is creativity.  Nothing
that I have said above precludes our artist from being a machine.

Another example, which I believe I heard from a recent Stanford Ph.D.
(sorry, can't remember who): consider Solomon's famous decision.
Everyone knows that a dispute over property can often be settled by
dividing the property, providing that the value of the property is not
destroyed by the act of division.  Solomon's creative decision
involved the realization (at least, we hope he realized it) that in a
particular case, if the rule was implemented in a particular
theatrical manner, the precondition could be ignored and the rule
would still achieve its goal.  We can then imagine Solomon to be a
rule-based system with a metasystem that is constantly checking for
generalizations, specializations, and heuristic shortcuts to the
normal rule sequences.  I think that Doug Lenat's EURISKO program has
something of this flavor, as do other learning programs.
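
[A minimal sketch of the Solomon example (every name and the "reaction"
model here are invented for illustration; this is not Lenat's system or
anyone's actual program): a rule has a precondition, and a meta-level
deliberately tries the rule with the precondition waived to see whether
the goal is still achieved. -- KIL]

```python
def divide_rule(claimants, divisible):
    """Base rule: split disputed property equally among claimants,
    but only when division does not destroy its value."""
    if not divisible:
        return None          # normal precondition blocks the rule
    return {c: 1.0 / len(claimants) for c in claimants}

def solomon(claimants, divisible, reaction):
    """Meta-level: when the precondition fails, *threaten* to apply the
    rule anyway; the claimants' reactions reveal the rightful owner, so
    the rule's goal is met without actually dividing the property."""
    result = divide_rule(claimants, divisible)
    if result is None:
        objectors = [c for c in claimants if reaction(c) == "object"]
        if len(objectors) == 1:
            return {objectors[0]: 1.0}
    return result
```

Here the creative step is exactly the one described above: ignoring the
precondition in a theatrical manner and observing that the goal is
still achieved.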

In the limit, we can imagine a system with nearly infinite computing 
power that builds models of its environment in its memory.  It carries
out experiments on this model, and verifies the experiments by
carrying them out in the real world when it can.  It can solve
ordinary problems through various applicable rule invocations,
unifications, planning, etc.  Problems requiring creativity can often
be solved by applying inappropriate rules and techniques (i.e.,
violating their preconditions) just to see what will happen --
sometimes it will turn out that the preconditions were unnecessarily
strict.  [The system I have just described is a fair approximation to
a human -- or even to a monkey, dog, or elephant.]

True genius in such a system would require that it construct new 
paradigms of thought and problem solving.  This will be much more 
difficult, but I don't doubt that we and our cybernetic offspring will
even be able to construct such progeny someday.

                                        -- Ken Laws ]

------------------------------

End of AIList Digest
********************

∂19-Sep-83  1751	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #59
Received: from SRI-AI by SU-AI with TCP/SMTP; 19 Sep 83  17:48:20 PDT
Date: Monday, September 19, 1983 4:16PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #59
To: AIList@SRI-AI


AIList Digest            Tuesday, 20 Sep 1983      Volume 1 : Issue 59

Today's Topics:
  Programming Languages -  Micro LISP Reviews,
  Machine Translation - Ada & Dictionary Request & Grammar Translation,
  AI Journals - Addendum,
  Bibliography - SNePS Research Group
----------------------------------------------------------------------

Date: Mon, 19 Sep 1983  11:41 EDT
From: WELD%MIT-OZ@MIT-MC
Subject: Micro LISPs

For a survey of micro LISPs see the August and Sept issues of 
Microsystems magazine. The Aug issue reviews muLISP, Supersoft LISP 
and The Stiff Upper Lisp. I believe that the Sept issue will continue 
the survey with some more reviews.

Dan

------------------------------

Date: 14 Sep 83 1:44:58-PDT (Wed)
From: decvax!genrad!mit-eddie!barmar @ Ucb-Vax
Subject: Re: Translation into Ada:  Request for Info
Article-I.D.: mit-eddi.713

I think the reference to the WWMCS conversion effort is a bad example 
when talking about automatic programming language translation.  I would be
very surprised if WWMCS is written in a high-level language.  It runs
on Honeywell GCOS machines, I believe, and I think that GCOS system 
programming is traditionally done in GMAP (GCOS Macro Assembler 
Program), especially at the time that WWMCS was written.  Only a 
masochist would even think of writing an automatic "anticompiler" (I 
have heard of uncompilers, but those are usually restricted to
figuring out the code produced by a known compiler, not arbitrary
human coding); researchers have found it hard enough to teach
computers to "understand" programs in HLLs, and it is often pretty
difficult for humans to understand others' assembler code.
--
                        Barry Margolin
                        ARPA: barmar@MIT-Multics
                        UUCP: ..!genrad!mit-eddie!barmar

------------------------------

Date: Mon 19 Sep 83 14:56:49-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: Request for m/c-readable foreign language dictionary info

I am looking for foreign-language dictionaries in machine-readable
form.  Of particular interest would be a subset containing
EDP-terminology.  This would be used to help automate translation of
computer-related technical materials.

Of major interest are German, Spanish, French, but others might be
useful also.

Any pointers appreciated.

Werner (UUCP:  ut-ngp!werner or ut-ngp!utastro!werner
         via:  { decvax!eagle , ucbvax!nbires , gatech!allegra!eagle ,
                 ihnp4 }
        ARPA: werner@utexas-20 or werner@utexas-11 )

------------------------------

Date: 19 Sep 1983 0858-PDT
From: PAZZANI at USC-ECL
Subject: Parsifal

I have a question about PARSIFAL (Marcus's deterministic parser) that
I hope someone can answer:

Is it easy (or possible) to convert grammar rules to the kind of rules 
that Parsifal uses?  Is there an algorithm to do so?
(i.e., by grammar rule, I mean things like:
S -> NP VP
VP -> VP2 NP PP
VP -> V3 INF
INF -> to VP
etc.
where by grammar rule Marcus means things like
{RULE MAJOR-DECL-S in SS-START
[=np][=verb]-->
Label c decl,major.
Deactivate ss-start. Activate parse-subj.}

{RULE UNMARKED-ORDER IN PARSE-SUBJ
[=np][=verb]-->
Attach 1st to c as np.
Deactivate Parse-subj. Activate parse-aux.}

Thanks in advance,
Mike Pazzani
Pazzani@usc-ecl

------------------------------

Date: 16 Sep 83 16:58:30-PDT (Fri)
From: ihnp4!cbosgd!cbscc!cbscd5!lvc @ Ucb-Vax
Subject: addendum to AI journal list
Article-I.D.: cbscd5.589

The following are journals that readers have sent me since the time I 
posted the list of AI journals.  As has been pointed out, individuals
can get subscriptions at a reduced rate.  Most of the prices I quoted
were the institutional price.

The American Journal of Computational Linguistics -- to be renamed
Computational Linguistics
        Subscription $15
        Don Walker, ACL
        SRI International
        Menlo Park, CA 94025.
------------------------------
Cognition and Brain Theory
        Lawrence Erlbaum Associates, Inc.
        365 Broadway,
        Hillsdale, New Jersey 07642
        $18 Individual $50 Institutional
        Quarterly
        Basic cognition, proposed models and discussion of
        consciousness and mental process, epistemology - from frames to
        neurons, as related to human cognitive processes. A "fringe"
        publication for AI topics, and a good forum for issues in cognitive
        science/psychology.
------------------------------
New Generation Computing
        Springer-Verlag New York Inc.
        Journal Fulfillment Dept.
        44 Hartz Way
        Secaucus, NJ 07094
        A quarterly English-language journal devoted to international
        research on the fifth generation computer.  [It seems to be
        very strong on hardware and logic programming.]
        1983 - 2 issues - $52. (Sample copy free.)
        1984 - 4 issues - $104.

Larry Cipriani
cbosgd!cbscd5!lvc

------------------------------

Date: 16 Sep 1983 10:38:57-PDT
From: shapiro%buffalo-cs@UDel-Relay
Subject: Your request for bibliographies


                           Bibliography
                 SNeRG: The SNePS Research Group
                  Department of Computer Science
             State University of New York at Buffalo
                     Amherst, New York 14226



     Copies of Departmental Technical Reports (marked with an "*")
should be requested from The Library Committee, Dept. of Computer
Science, SUNY/Buffalo, 4226 Ridge Lea Road, Amherst, NY 14226.
Businesses are asked to enclose $3.00 per report requested with their
requests. Others are asked to enclose $1.00 per report.

     Copies of papers other than Departmental Technical Reports may be
requested directly from Prof. Stuart C. Shapiro at the above address.


 1.  Shapiro, S. C. [1971] A net structure for semantic
     information storage, deduction and retrieval. Proc. Second
     International Joint Conference on Artificial Intelligence,
     William Kaufman, Los Altos, CA, 212-223.

 2.  Shapiro, S. C. [1972] Generation as parsing from a network
     into a linear string. American Journal of Computational
     Linguistics, Microfiche 33, 42-62.

 3.  Shapiro, S. C. [1976] An introduction to SNePS (Semantic Net
     Processing System). Technical Report No. 31, Computer
     Science Department, Indiana University, Bloomington, IN, 21
     pp.

 4.  Shapiro, S. C. and Wand, M. [1976] The Relevance of
     Relevance. Technical Report No. 46, Computer Science
     Department, Indiana University, Bloomington, IN, 21pp.

 5.  Bechtel, R. and Shapiro, S. C. [1976] A logic for semantic
     networks. Technical Report No. 47, Computer Science
     Department, Indiana University, Bloomington, IN, 29pp.

 6.  Shapiro, S. C. [1977] Representing and locating deduction
     rules in a semantic network. Proc. Workshop on
     Pattern-Directed Inference Systems. SIGART Newsletter, 63
     14-18.

 7.  Shapiro, S. C. [1977] Representing numbers in semantic
     networks: prolegomena. Proc. Fifth International Joint
     Conference on Artificial Intelligence, William Kaufman, Los
     Altos, CA, 284.

 8.  Shapiro, S. C. [1977] Compiling deduction rules from a
     semantic network into a set of processes. Abstracts of
     Workshop on Automatic Deduction, MIT, Cambridge, MA.
     (Abstract only), 7pp.

 9.  Shapiro, S. C. [1978] Path-based and node-based inference in
     semantic networks. In D. Waltz, ed. TINLAP-2: Theoretical
     Issues in Natural Languages Processing. ACM, New York,
     219-222.

10.  Shapiro, S. C. [1979] The SNePS semantic network processing
     system. In N. V. Findler, ed. Associative Networks: The
     Representation and Use of Knowledge by Computers. Academic
     Press, New York, 179-203.

11.  Shapiro, S. C. [1979] Generalized augmented transition
     network grammars for generation from semantic networks.
     Proc. 17th Annual Meeting of the Association for
     Computational Linguistics. University of California at San
     Diego, 22-29.

12.  Shapiro, S. C. [1979] Numerical quantifiers and their use in
     reasoning with negative information. Proc. Sixth
     International Joint Conference on Artificial Intelligence,
     William Kaufman, Los Altos, CA, 791-796.

13.  Shapiro, S. C. [1979] Using non-standard connectives and
     quantifiers for representing deduction rules in a semantic
     network. Invited paper presented at Current Aspects of AI
     Research, a seminar held at the Electrotechnical Laboratory,
     Tokyo, 22pp.

14.  * McKay, D. P. and Shapiro, S. C. [1980] MULTI: A LISP Based
     Multiprocessing System. Technical Report No. 164, Department
     of Computer Science, SUNY at Buffalo, Amherst, NY, 20pp.
     (Contains appendices not in LISP conference version)

15.  McKay, D. P. and Shapiro, S. C. [1980] MULTI - A LISP based
     multiprocessing system. Proc. 1980 LISP Conference, Stanford
     University, Stanford, CA, 29-37.

16.  Shapiro, S. C. and McKay, D. P. [1980] Inference with
     recursive rules. Proc. First Annual National Conference on
     Artificial Intelligence, William Kaufman, Los Altos, CA,
     121-123.

17.  Shapiro, S. C. [1980] Review of Fahlman, Scott. NETL: A
     System for Representing and Using Real-World Knowledge. MIT
     Press, Cambridge, MA, 1979. American Journal of
     Computational Linguistics 6, 3, 183-186.

18.  McKay, D. P. [1980] Recursive Rules - An Outside Challenge.
     SNeRG Technical Note No. 1, Department of Computer Science,
     SUNY at Buffalo, Amherst, NY, 11pp.

19.  * Maida, A. S. and Shapiro, S. C. [1981] Intensional
     concepts in propositional semantic networks. Technical
     Report No. 171, Department of Computer Science, SUNY at
     Buffalo, Amherst, NY, 69pp.

20.  * Shapiro, S. C. [1981] COCCI: a deductive semantic network
     program for solving microbiology unknowns. Technical Report
     No. 173, Department of Computer Science, SUNY at Buffalo,
     Amherst, NY, 24pp.

21.  * Martins, J.; McKay, D. P.; and Shapiro, S. C. [1981]
     Bi-directional Inference. Technical Report No. 174,
     Department of Computer Science, SUNY at Buffalo, Amherst,
     NY, 32pp.

22.  * Martins, J., and Shapiro, S. C. [1981] A Belief Revision
     System Based on Relevance Logic and Heterarchical Contexts.
     Technical Report No. 172, Department of Computer Science,
     SUNY at Buffalo, Amherst, NY, 42pp.

23.  Shapiro, S. C. [1981] Summary of Scientific Progress. SNeRG
     Technical Note No. 3, Department of Computer Science, SUNY
     at Buffalo, Amherst, NY, 2pp.

24.  McKay, D. P. and Martins, J. SNePSLOG User's Manual. SNeRG
     Technical Note No. 4, Department of Computer Science, SUNY
     at Buffalo, Amherst, NY, 8pp.

25.  McKay, D. P.; Shubin, H.; and Martins, J. [1981] RIPOFF:
     Another Text Formatting Program. SNeRG Technical Note No. 2,
     Department of Computer Science, SUNY at Buffalo, Amherst,
     NY, 18pp.

26.  * Neal, J. [1981] A Knowledge Engineering Approach to
     Natural Language Understanding. Technical Report No. 179,
     Computer Science Department, SUNY at Buffalo, Amherst, NY,
     67pp.

27.  * Srihari, R. [1981] Combining Path-based and Node-based
     Reasoning in SNePS. Technical Report No. 183, Department of
     Computer Science, SUNY at Buffalo, Amherst, NY, 22pp.

28.  McKay, D. P.; Martins, J.; Morgado, E.; Almeida, M.; and
     Shapiro, S. C. [1981] An Assessment of SNePS for the Navy
     Domain. SNeRG Technical Note No. 6, Department of Computer
     Science, SUNY at Buffalo, Amherst, NY, 48pp.

29.  Shapiro, S. C. [1981] What do Semantic Network Nodes
     Represent? SNeRG Technical Note No. 7, Department of
     Computer Science, SUNY at Buffalo, Amherst, NY, 12pp.
     Presented at the workshop on Foundational Threads in Natural
     Language Processing, SUNY at Stony Brook.

30.  McKay, D. P., and Shapiro, S. C. [1981] Using active
     connection graphs for reasoning with recursive rules.
     Proceedings of the Seventh International Joint Conference on
     Artificial Intelligence, William Kaufman, Los Altos, CA,
     368-374.

31.  Shapiro, S. C. and The SNePS Implementation Group [1981]
     SNePS User's Manual. Department of Computer Science, SUNY at
     Buffalo, Amherst, NY, 44pp.

32.  Shapiro, S. C.; McKay, D. P.; Martins, J.; and Morgado, E.
     [1981] SNePSLOG: A "Higher Order" Logic Programming
     Language. SNeRG Technical Note No. 8, Department of Computer
     Science, SUNY at Buffalo, Amherst, NY, 16pp. Presented at
     the Workshop on Logic Programming for Intelligent Systems,
     R.M.S. Queen Mary, Long Beach, CA.

33.  * Shubin, H. [1981] Inference and Control in Multiprocessing
     Environments. Technical Report No. 186, Department of
     Computer Science, SUNY at Buffalo, Amherst, NY, 26pp.

34.  Shapiro, S. C. [1982] Generalized Augmented Transition
     Network Grammars for Generation from Semantic Networks. The
     American Journal of Computational Linguistics 8, 1 (January
     - March), 12-22.

35.  Almeida, M.J. [1982] NETP2 - A Parser for a Subset of
     English. SNeRG Technical Note No. 9, Department of Computer
     Science, SUNY at Buffalo, Amherst, NY, 32pp.

36.  * Tranchell, L.M. [1982] A SNePS Implementation of KL-ONE,
     Technical Report No. 198, Department of Computer Science,
     SUNY at Buffalo, Amherst, NY, 21pp.

37.  Shapiro, S.C. and Neal, J.G. [1982] A Knowledge engineering
     Approach to Natural language understanding. Proceedings of
     the 20th Annual Meeting of the Association for Computational
     Linguistics, ACL, Menlo Park, CA, 136-144.

38.  Donlon, G. [1982] Using Resource Limited Inference in SNePS.
     SNeRG Technical Note No. 10, Department of Computer Science,
     SUNY at Buffalo, Amherst, NY, 10pp.

39.  Nutter, J. T. [1982] Defaults revisited or "Tell me if
     you're guessing". Proceedings of the Fourth Annual
     Conference of the Cognitive Science Society, Ann Arbor, MI,
     67-69.

40.  Shapiro, S. C.; Martins, J.; and McKay, D. [1982]
     Bi-directional inference. Proceedings of the Fourth Annual
     Meeting of the Cognitive Science Society, Ann Arbor, MI,
     90-93.

41.  Maida, A. S. and Shapiro, S. C. [1982] Intensional concepts
     in propositional semantic networks. Cognitive Science 6, 4
     (October-December), 291-330.

42.  Martins, J. P. [1983] Belief revision in MBR. Proceedings of
     the 1983 Conference on Artificial Intelligence, Rochester,
     MI.

43.  Nutter, J. T. [1983] What else is wrong with non-monotonic
     logics?: representational and informational shortcomings.
     Proceedings of the Fifth Annual Meeting of the Cognitive
     Science Society, Rochester, NY.

44.  Almeida, M. J. and Shapiro, S. C. [1983] Reasoning about the
     temporal structure of narrative texts. Proceedings of the
     Fifth Annual Meeting of the Cognitive Science Society,
     Rochester, NY.

45.  * Martins, J. P. [1983] Reasoning in Multiple Belief Spaces.
     Ph.D. Dissertation, Technical Report No. 203, Computer
     Science Department, SUNY at Buffalo, Amherst, NY, 381 pp.

46.  Martins, J. P. and Shapiro, S. C. [1983] Reasoning in
     multiple belief spaces. Proceedings of the Eighth
     International Joint Conference on Artificial Intelligence,
     William Kaufman, Los Altos, CA, 370-373.

47.  Nutter, J. T. [1983] Default reasoning using monotonic
     logic: a modest proposal. Proceedings of The National
     Conference on Artificial Intelligence, William Kaufman, Los
     Altos, CA, 297-300.

------------------------------

End of AIList Digest
********************

∂20-Sep-83  1121	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #60
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Sep 83  11:19:24 PDT
Date: Tuesday, September 20, 1983 9:41AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #60
To: AIList@SRI-AI


AIList Digest            Tuesday, 20 Sep 1983      Volume 1 : Issue 60

Today's Topics:
  AI Journals - AI Journal Changes,
  Applications - Cloud Data & AI and Music,
  Games - Go Tournament,
  Intelligence - Turing test & Definitions
----------------------------------------------------------------------

Date: Mon, 19 Sep 83 18:51 PDT
From: Bobrow.PA@PARC-MAXC.ARPA
Subject: News about the Artificial Intelligence Journal


Changes in the Artificial Intelligence Journal

Daniel G. Bobrow (Editor-in-chief)

There have been a number of changes in the Artificial Intelligence
Journal which are of interest to the AI community.

1) The size of the journal is increasing.  In 1982, the journal was
published in two volumes of three issues each (about 650 printed
pages per year).  In 1983, we increased the size to two volumes of
four issues each (about 900 printed pages per year).  In order to
accommodate the increasing number of high-quality papers that are
being submitted to the journal, in 1984 the journal will be published
in three volumes of three issues each (about 1000 printed pages per
year).

2) Despite the journal size increase, North Holland will maintain the
current price of $50 per year for personal subscriptions for
individual (non-institutional) members of major AI organizations
(e.g. AAAI, SIGART).  To obtain such a subscription, members of such
organizations should send a copy of their membership acknowledgement,
and their check for $50 (made out to Artificial Intelligence) to:
        Elsevier Science Publishers
        Attn: John Tagler
        52 Vanderbilt Avenue
        New York, New York 10017
North Holland (Elsevier) will acknowledge receipt of the request for
subscription, provide information about which issues will be included
in your subscription, and when they should arrive.  Back issues are
not available at the personal rate.

3) The AIJ editorial board has recognized the need for good review
articles in subfields of AI.  To encourage the writing of such
articles, an honorarium of $1000 will be awarded the authors of any
review accepted by the journal.  Although review papers will go
through the usual review process, when accepted they will be given
priority in the publication queue.  Potential authors are reminded
that review articles are among the most cited articles in any field.

4) The publication process takes time.  To keep an even flow of
papers in the journal, we must maintain a queue of articles of about
six months.  To allow people to know about important research results
before articles have been published, we will publish lists of papers
accepted for publication in earlier issues of the journal, and make
such lists available to other magazines (e.g., the AAAI magazine,
SIGART News).

5) New book review editor: Mark Stefik has taken the job of book
review editor for the Artificial Intelligence Journal.  The following
note from Mark describes his plans to make the book review section
much more active than it has been in the past.

                    ------------------

The Book Review Section of the Artificial Intelligence Journal

Mark Stefik - Book Review Editor

I am delighted for this opportunity to start an active review column
for AI, and invite your suggestions and participation.

        This is an especially good time to review work in artificial
intelligence.  Not only is there a surge of interest in AI, but there
are also many new results and publications in computer science, in
the cognitive sciences and in other related sciences.  Many new
projects are just beginning and finding new directions (e.g., machine
learning, computational linguistics), new areas of work are opening
up (e.g., new architectures), and others are reporting on long term
projects that are maturing (computer vision).  Some readers will want
to track progress in specialized areas; others will find inspiration
and direction from work breaking outside the field.  There is enough
new and good but unreviewed work that I would like to include two or
three book reviews in every issue of Artificial Intelligence.

        I would like this column of book reviews to become essential
reading for the scientific audience of this journal.  My goal is to
cover both scientific works and textbooks.  Reviews of scientific
work will not only provide an abstract of the material, but also show
how it fits into the body of existing work.  Reviews of textbooks
will discuss not only clarity and scope, but also how well the
textbook serves for teaching.  For controversial work of major
interest I will seek more than one reviewer.

        To get things started, I am seeking two things from the
community now.  First, suggestions of books for review.  Books
written in the past five years or so will be considered.  The scope
of the fields considered will be broad.  The main criterion will be
scientific interest to the readership.  For example, books from as
far afield as cultural anthropology or sociobiology will be
considered if they are sufficiently relevant and readable by an AI
audience.  Occasionally, important books intended for a popular
audience will also be considered.

        My second request is for reviewers.  I will be asking
colleagues for reviews of particular books, but will also be open
both to volunteers and suggestions.  Although I will tend to solicit
reviews from researchers of breadth and maturity, I recognize that
graduate students preparing theses are some of the best read people
in specialized areas.  For them, reviews in Artificial Intelligence
will be a good way to share the fruits of intensive reading in
thesis preparation, and also to achieve some visibility.  Reviewers
will receive a personal copy of the book reviewed.

        Suggestions will reach me at the following address.
Publishers should send two copies of works to be reviewed.


Mark Stefik
Knowledge Systems Area
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, California  94304

ARPANET Address:  STEFIK@PARC

------------------------------

Date: Mon, 19 Sep 83 17:09:09 PDT
From: Alex Pang <v.pang@UCLA-LOCUS>
Subject: help on satellite image processing


        I'm planning to do some work on cloud formation prediction 
based either purely on previous cloud formations or together with some
other information - e.g. pressure, humidity, wind, etc.  Does anyone
out there know of any existing system doing related work, and if so,
how and where I can get more information on it?  Also, do
any of you know where I can get satellite data with 3D cloud
information?
        Thank you very much.

                                        alex pang

------------------------------

Date: 16 Sep 83 22:26:21 EDT  (Fri)
From: Randy Trigg <randy%umcp-cs@UDel-Relay>
Subject: AI and music

Speaking of creativity and such, I've had an interest in AI and music
for some time.  What I'd like is any pointers to companies and/or
universities doing work in such areas as cognitive aspects of
appreciating and creating music, automated music analysis and 
synthesis, and "smart" aids for composers and students.

Assuming a reasonable response, I'll post results to the AIList.  
Thanks in advance.

Randy Trigg
...!seismo!umcp-cs!randy (Usenet)
randy.umcp-cs@udel-relay (Arpanet)

------------------------------

Date: 17 Sep 83 23:51:40-PDT (Sat)
From: harpo!utah-cs!utah-gr!thomas @ Ucb-Vax
Subject: Re: Go Tournament
Article-I.D.: utah-gr.908

I'm sure we could find some time on one of our Vaxen for a Go
tournament.  If you're writing it on some other machine, make sure it
is portable.

=Spencer

------------------------------

Date: Fri 16 Sep 83 20:07:31-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Turing test

It was once playfully proposed to permute the actors in the classical 
definition of the Turing test, and thus define an intelligent entity
as one that can tell the difference between a human and a (deceptively
programmed) computer.  This may have been prompted by the well-known
incident involving Eliza.  The result is that, as our AI systems get
better, the standard for intelligence will increase.  This definition
may even enable some latter-day Goedel to prove mathematically that
computers can never be intelligent!

                                - Richard :-)

------------------------------

Date: Fri, 16 Sep 83 19:36:53 PDT
From: harry at lbl-nmm
Subject: Psychology and Artificial Intelligence.

Members of this list might find it interesting to read an article ``In
Search of Unicorns'' by M. A. Boden (author of ``Artificial
Intelligence and Natural Man'') in The Sciences (published by the New
York Academy of Sciences).  It discusses the `computational style' in 
theoretical psychology.  It is not a technical article.

                                        Harry Weeks

------------------------------

Date: 15 Sep 83 17:10:04-PDT (Thu)
From: ihnp4!arizona!robert @ Ucb-Vax
Subject: Another Definition of Intelligence
Article-I.D.: arizona.4675


     A problem that bothers me about the Turing test is having to
provoke the machine with such specific questioning.  So jumping ahead
a couple of steps, I would accept a machine as an adequate
intelligence if it could listen to a conversation between other
intelligences, and be able to interject at appropriate points such
that these others would not be able to infer the mechanical aspect of
this new source.  Our experiences with human intelligence would make
us very suspicious of anyone or anything that sits quietly without new,
original, or synthetic comments while within an environment of
discussion.

     And then to fully qualify, upon overhearing these discussions
over the net, I'd expect it to start conjecturing on the question of
intelligence, produce its own definition, and then start sending out
feelers to ascertain if there is anything out there qualifying
under its definition.

------------------------------

Date: 16 Sep 83 23:11:08-PDT (Fri)
From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax
Subject: Re: Another Definition of Intelligence
Article-I.D.: umcp-cs.2608

Finally, someone has come up with a fresh point of view in an
otherwise stale discussion!

Arizona!robert suggests that a machine could be classified as 
intelligent if it can discern intelligence within its environment, as
opposed to being prodded into displaying intelligence.  But how can we
tell if the machine really has a discerning mind?  Does it get
involved in an interesting conversation and respond with its own
ideas?  Perhaps it just sits back and says nothing, considering the
conversation too trivial to participate in.

And therein lies the problem with this idea.  What if the machine 
doesn't feel compelled to interact with its environment?  Is this a
sign of inability, or disinterest?  Possibly disinterest.  A machine
mind might not be interested in its environment, but in its own
thoughts.  Its own thoughts ARE its environment.  Perhaps it's a sign
of some mental aberration.  I'm sure that sufficiently intelligent
machines will be able to develop all sorts of wonderfully neurotic
patterns of behavior.

I know.  Let's build a machine with only a console for an output
device and wait for it to say, "Hey, anybody intelligent out there?"
"You got any VAXEN out there?"

                                                - Speaker
-- Full-Name:  Speaker-To-Animals
       Csnet:  speaker@umcp-cs
       Arpa:   speaker.umcp-cs@UDel-Relay

------------------------------

Date: 17 Sep 83 19:17:21-PDT (Sat)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax
Subject: Life, don't talk to me about life....
Article-I.D.: umcp-cs.2628

        From:  jpj@mss
        Subject:  Re: Another Definition of Intelligence
        To:  citcsv!seismo!rlgvax!cvl!umcp-cs!speaker

        I find your notion of an artificial intelligence sitting
        back, taking in all that goes on around it, but not being
        motivated to comment (perhaps due to boredom) an amusing
        idea.  Have you read "The Restaurant at the End of the
        Universe?"  In that story is a most entertaining ai - a
        chronically depressed robot (whose name escapes me at the
        moment - I don't have my copy at hand) who thinks so much
        faster than all the mortals around it that it is always
        bored and *feels* unappreciated.  (Sounds like some of my
        students!)

Ah yes, Marvin the paranoid android.  "Here I am, brain the size of a
planet and all they want me to do is pick up a piece of paper."

This is really interesting.  You might think that a robot with such a
huge intellect would also develop an oversized ego...  but just the
reverse could be true.  He thinks so fast and so well that he becomes
bored and disgusted with everything around himself... so he withdraws
and wishes his boredom and misery would end.

I doubt Adams had this in mind when he wrote the book, but it fits
together nicely anyway.
--
                                        - Speaker
                                        speaker@umcp-cs
                                        speaker.umcp-cs@UDel-Relay

------------------------------

End of AIList Digest
********************

∂22-Sep-83  1847	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #61
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Sep 83  18:47:28 PDT
Date: Thursday, September 22, 1983 5:15PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #61
To: AIList@SRI-AI


AIList Digest            Friday, 23 Sep 1983       Volume 1 : Issue 61

Today's Topics:
  AI Applications - Music,
  AI at Edinburgh - Request,
  Games - Prolog Puzzle Solution,
  Seminars - Talkware & Hofstadter,
  Architectures - Parallelism,
  Technical Reports - Rutgers
----------------------------------------------------------------------

Date: 20 Sep 1983 2120-PDT
From: FC01@USC-ECL
Subject: Re: Music in AI

Music in AI - find Art Wink, formerly of the U. of Pgh. Dept. of Info. Sci.
He had a real nice program to imitate Debussy (experts could not tell
its compositions from originals).

------------------------------

Date: 18 Sep 83 12:01:27-PDT (Sun)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: U of Edinburgh, Scotland Inquiry
Article-I.D.: dartvax.224


Who knows anything about the current status of the Artificial
Intelligence school at the University of Edinburgh?  I've heard
they've been through hard times in recent years, what with the
Lighthill report and British funding shakeups, but what has been going
on within the past year or so?  I'd appreciate any gossip/rumors/facts,
and if anyone knows that they're on the net, their address.

                               --decvax!dartvax!dartlib!lorien
                                 Lorien Y. Pratt

------------------------------

Date: Mon 19 Sep 83 02:25:41-PDT
From: Motoi Suwa <Suwa@Sumex-AIM>
Subject: Puzzle Solution

                 [Reprinted from the Prolog Digest.]

    Date: 14 Sep. 1983
    From: K.Handa  ETL Japan
    Subject: Another Puzzle Solution

This is the solution to Alan's puzzle, introduced on 24 Aug.

  ?-go(10).

will display the ten disgit number as following:

  -->6210001000

and

  ?-go(4).

will:

  -->1210
  -->2020

I found the following numbers:

  6210001000
   521001000
    42101000
     3211000
       21200
        1210
        2020

The following is the complete program (DEC10 Prolog Ver. 3):



/*** initial assertion ***/

init(D):- ass_xn(D),assert(rest(D)),!.

ass_xn(0):- !.
ass_xn(D):- D1 is D-1,asserta(x(D1,_)),asserta(n(D1)),ass_xn(D1).

/*** main program ***/

go(D):- init(D),guess(D,0).
go(_):- abolish(x,2),abolish(n,1),abolish(rest,1).

/* guess 'N'th digit */

guess(D,D):- result,!,fail.
guess(D,N):- x(N,X),var(X),!,n(Y),N=<Y,N*Y=<D,ass(N,Y),set(D,N,Y),
           N1 is N+1,guess(D,N1).
guess(D,N):- x(N,X),set(D,N,X),N1 is N+1,guess(D,N1).

/* let 'N'th digit be 'X' */

ass(N,X):- only(retract(x(N,_))),asserta(x(N,X)),only(update(1)).
ass(N,_):- retract(x(N,_)),asserta(x(N,_)),update(-1),!,fail.

only(X):- X,!.

/* 'X' 'N's appear in the sequence of digits */

set(D,N,X):- count(N,Y),rest(Z),!,Y=<X,X=<Y+Z,X1 is X-Y,set1(D,N,X1,0).

set1(_,N,0,_):- !.
set1(D,N,X,P):- n(M),P=<M,x(M,Y),var(Y),M*N=<D,ass(M,N),set(D,M,N),
              X1 is X-1,P1 is M,set1(D,N,X1,P1).

/* 'X' is the number of digits whose value is 'N' */

count(N,X):- bagof(M,M^(x(M,Z),nonvar(Z),Z=N),L),length(L,X).
count(_,0).

/* update the number of digits whose value is not yet assigned */

update(Z):- only(retract(rest(X))),Z1 is X-Z,assert(rest(Z1)).
update(Z):- retract(rest(X)),Z1 is X+Z,assert(rest(Z1)),!,fail.

/* display the result */

result:- print(-->),n(N),x(N,M),print(M),fail.
result:- nl.

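[For readers who missed the 24 Aug issue: the puzzle asks for
self-describing numbers, in which the digit at position N equals the
count of occurrences of digit N in the number itself.  The solutions
listed above can be cross-checked with a few lines of code -- sketched
here in Python rather than Prolog, purely as an illustration.  -- KIL]

```python
def is_self_describing(s: str) -> bool:
    """True if the digit at position n equals the count of digit n in s."""
    return all(int(s[n]) == s.count(str(n)) for n in range(len(s)))

# All seven numbers reported above satisfy the property.
solutions = ["6210001000", "521001000", "42101000",
             "3211000", "21200", "1210", "2020"]
print(all(is_self_describing(s) for s in solutions))  # prints True
```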
------------------------------

Date: 21 Sep 83  1539 PDT
From: David Wilkins <DEW@SU-AI>
Subject: Talkware Seminars

                [Reprinted from the SU-SCORE bboard.]


           1127 TW Talkware seminar Weds. 2:15

I will be organizing a weekly seminar this fall on a new area I am 
currently developing as a research topic: the theory of "talkware".
This area deals with the design and analysis of languages that are
used in computing, but are not programming languages.  These include
specification languages, representation languages, command languages,
protocols, hardware description languages, data base query languages,
etc.  There is currently a lot of ad hoc but sophisticated practice
for which a more coherent and general framework needs to be developed.
The situation is analogous to the development of principles of
programming languages from the diversity of "coding" languages and
methods that existed in the early fifties.

The seminar will include outside speakers and student presentations of
relevant literature, emphasizing how the technical issues dealt with
in current projects fit into the development of talkware theory.  It will
meet at 2:15 every Wednesday in Jacks 301.  The first meeting will be
Wed.  Sept. 28.  For a more extensive description, see
{SCORE}<WINOGRAD>TALKWARE or {SAIL}TALKWA[1,TW].

------------------------------

Date: Thu 22 Sep 00:23
From: Jeff Shrager
Subject: Hofstader seminar at MIT

                 [Reprinted from the CMU-AI bboard.]


Douglas Hofstadter is giving a course this semester at MIT.  I thought
that the abstract would interest some of you.  The first session takes
place today.
                          ------
"Perception, Semanticity, and Statistically Emergent Mentality"
A seminar to be given fall semester by Douglas Hofstadter

        In this seminar, I will present my viewpoint about the nature
of mind and the goals of AI.  I will try to explain (and thereby
develop) my vision of how we perceive the essence of things, filtering
out the details and getting at their conceptual core.  I call this
"deep perception", or "recognition".

        We will review some earlier projects that attacked some
related problems, but primarily we will be focussing on my own
research projects, specifically: Seek-Whence (perception of sequential
patterns), Letter Spirit (perception of the style of letters), Jumbo
(reshuffling of parts to make "well-chunked" wholes), and Deep Sea
(analogical perception).  These tightly related projects share a
central philosophy: that cognition (mentality) cannot be programmed
explicitly but must emerge "epiphenomenally", i.e., as a consequence
of the nondeterministic interaction of many independent "subcognitive"
pieces.  Thus the overall "mentality" of such a system is not directly
programmed; rather, it EMERGES as an observable (but unprogrammed)
phenomenon -- a statistical consequence of many tiny semi-cooperating
(and of course programmed) pieces.  My projects all involve certain
notions under development, such as:

-- "activation level": a measure of the estimated relevance of a given
   Platonic concept at a given time;
-- "happiness": a measure of how easy it is to accomodate a structure
   and its currently accepted Platonic class to each other;
-- "nondeterministic terraced scan": a method of homing in to the best
   category to which to assign something;
-- "semanticity": the measure of how abstractly rooted (intensional) a
   perception is;
-- "slippability": the ease of mutability of intensional
   representational structures into "semantically close" structures;
-- "system temprature": a number measuring how chaotically active the
   whole system is.

        This strategy for AI is permeated by probabilistic or
statistical ideas.  The main idea is that things need not happen in
any fixed order; in fact, that chaos is often the best path to follow
in building up order.  One puts faith in the reliability of
statistics: a sensible, coherent total behavior will emerge when there
are enough small independent events being influenced by high-level
parameters such as temperature, activation levels, happinesses.  A
challenge is to develop ways such a system can watch its own
activities and use those observations to evaluate its own progress, to
detect and pull itself out of ruts it chances to fall into, and to
guide itself toward a satisfying outcome.

        ... Prerequisites: an ability to program well, preferably in
Lisp, and an interest in philosophy of mind and artificial
intelligence.

------------------------------

Date: 18 Sep 83 22:48:56-PDT (Sun)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: Parallelism et. al.
Article-I.D.: dartvax.229

The Parallelism and AI projects at the University of Maryland sound
very interesting.  I agree with an article posted a few days back that
parallel hardware won't necessarily produce any significantly new
methods of computing, as we've been running parallel virtual machines
all along.  Parallel hardware is another milestone along the road to
"thinking in parallel", however, getting away from the purely Von
Neumann thinking that's done in the DP world these days.  It's always
seemed silly to me that our computers are so serial when our brains,
the primary analogy we have for "thinking machines," are so obviously
parallel mechanisms.  Finally we have the technology (software AND
hardware) to follow in our machine architecture cognitive concepts
that evolution has already found most powerful.

I feel that the sector of the Artificial Intelligence community that
pays close attention to psychology and the workings of the human brain
deserves more attention these days, as we move from writing AI
programs that "work" (and don't get me wrong, they work very well!) to
those that have a generalizable theoretical basis.  One of these years,
and better sooner than later, we'll make a quantum leap in AI research
and articulate some of the fundamental structures and methods that are
used for thinking.  These may or may not be isomorphic to human
thinking, but in either case we'll do well to look to the human brain
for inspiration.

I'd like to hear more about the work at the University of Maryland; in
particular the prolog and the parallel-vision projects.

What do you think of the debate between what I'll call the Hofstadter 
viewpoint: that we should think long term about the future of
artificial intelligence, and the Feigenbaum credo: that we should stop
philosophizing and build something that works?  (Apologies to you both
if I've misquoted)

                            --Lorien Y. Pratt
                              decvax!dartvax!lorien
                              (Dartmouth College)

------------------------------

Date: 18 Sep 83 23:30:54-PDT (Sun)
From: pur-ee!uiucdcs!uiuccsb!cytron @ Ucb-Vax
Subject: AI and architectures - (nf)
Article-I.D.: uiucdcs.2883


Forwarded at the request of speaker:  /***** uiuccsb:net.arch /
umcp-cs!speaker / 12:20 am Sep 17, 1983 */

        The fact remains that if we don't have the algorithms for
        doing something with current hardware, we still won't be
        able to do it with faster or more powerful hardware.

The fact remains that if we don't have any algorithms to start with 
then we shouldn't even be talking implementation.  This sounds like a
software engineer's solution anyway, "design the software and then 
find a CPU to run it on."

New architectures, while not providing a direct solution to a lot of
AI problems, provide the test-bed necessary for advanced AI research.
That's why everyone wants to build these "amazingly massive" parallel
architectures.  Without them, AI research could grind to a standstill.

        To some extent these efforts change our way of thinking
        about problems, but for the most part they only speed up
        what we knew how to do already.

Parallel computation is more than just "speeding things up."  Some
problems are better solved concurrently.

        My own belief is that the "missing link" to AI is a lot of
        deep thought and hard work, followed by VLSI implementation
        of algorithms that have (probably) been tested using
        conventional software running on conventional architectures.

Gad...that's really provincial: "deep thought, hard work, followed by
VLSI implementation."  Are you willing to wait a millennium or two while
your VAX grinds through the development and testing of a truly 
high-velocity AI system?

        If we can master knowledge representation and learning, we
        can begin to get away from programming by full analysis of
        every part of every algorithm needed for every task in a
        domain.  That would speed up our progress more than new
        architectures.

I agree.  I also agree with you that hardware is not in itself a
solution and that we need more thought put to the problems of building
intelligent systems.  What I am trying to point out, however, is that
we need integrated hardware/software solutions.  Highly parallel
computer systems will become a necessity, not only for research but
for implementation.

                                                        - Speaker
-- Full-Name:  Speaker-To-Animals
Csnet:  speaker@umcp-cs
Arpa:   speaker.umcp-cs@UDel-Relay

This must be hell...all I can see are flames... towering flames!

------------------------------

Date: 19 Sep 83 9:36:35-PDT (Mon)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: AI and Architecture
Article-I.D.: ncsu.2338


    Sheesh.  Everyone seems so excited about whether a parallel machine
is or will lead to fundamentally new things.  I agree with someone's
comment that time-sharing and multi-programming have been
conceptually quite parallel "virtual" machines for some time.
Just more and cheaper of the same.  Perhaps the added availability
will lead someone to have a good idea or two about how to do
something better -- in that sense it seems certain that something
good will come of proliferation and popularization of parallelism.
But for my money, there is nothing really, fundamentally different.

    Unless it is non-determinism.  Parallel systems tend to be less
deterministic than their simplex brethren, though vast efforts are
usually expended in an effort to stamp out this property.  Take me
for example: I am VERY non-deterministic (just ask my wife) and yet I
am also smarter than a lot of AI programs.  The breakthrough in AI/Arch
will, in my non-determined opinion, come when people stop trying to
squeeze parallel systems into the more restricted modes of simplex
systems, and develop new paradigms for how to let such a system spread
its wings in a dimension OTHER THAN performance.  From a pragmatic
view, I think this will not happen until people take error recovery
and exception processing more seriously, since there is a fine line
between an error and a new thought ....
    ----GaryFostel----

------------------------------

Date: 20 Sep 83 18:12:15 PDT (Tuesday)
From: Bruce Hamilton <Hamilton.ES@PARC-MAXC.ARPA>
Reply-to: Hamilton.ES@PARC-MAXC.ARPA
Subject: Rutgers technical reports

This is probably of general interest.  --Bruce

    From: PETTY@RUTGERS.ARPA
    Subject: 1983 abstract mailing

Below is a list of our newest technical reports.

The abstracts for these are available for access via FTP with user 
account <anonymous> with any password.  The file name is:

        <library>tecrpts-online.doc

If you wish to order copies of any of these reports please send mail 
via the ARPANET to LOUNGO@RUTGERS or PETTY@RUTGERS.  Thank you!!


CBM-TR-128 EVOLUTION OF A PLAN GENERATION SYSTEM, N.S.  Sridharan,
J.L.  Bresina and C.F. Schmidt.

CBM-TR-133 KNOWLEDGE STRUCTURES FOR A MODULAR PLANNING SYSTEM, 
N.S.  Sridharan and J.L. Bresina.

CBM-TR-134 A MECHANISM FOR THE MANAGEMENT OF PARTIAL AND 
INDEFINITE DESCRIPTIONS, N.S. Sridharan and J.L. Bresina.

DCS-TR-126 HEURISTICS FOR FINDING A MAXIMUM NUMBER OF DISJOINT 
BOUNDED PATHS, D. Ronen and Y. Perl.

DCS-TR-127 THE BALANCED SORTING NETWORK, M. Dowd, Y. Perl, L.
Rudolph and M. Saks.

DCS-TR-128 SOLVING THE GENERAL CONSISTENT LABELING (OR CONSTRAINT 
SATISFACTION) PROBLEM: TWO ALGORITHMS AND THEIR EXPECTED COMPLEXITIES,
B. Nudel.

DCS-TR-129 FOURIER METHODS IN COMPUTATIONAL FLUID AND FIELD 
DYNAMICS, R. Vichnevetsky.

DCS-TR-130 DESIGN AND ANALYSIS OF PROTECTION SCHEMES BASED ON THE 
SEND-RECEIVE TRANSPORT MECHANISM, (Thesis) R.S.  Sandhu.  (If you wish
to order this thesis, a pre-payment of $15.00 is required.)

DCS-TR-131 INCREMENTAL DATA FLOW ANALYSIS ALGORITHMS, M.C.  Paull 
and B.G.  Ryder.

DCS-TR-132 HIGH ORDER NUMERICAL SOMMERFELD BOUNDARY CONDITIONS:  
THEORY AND EXPERIMENTS, R. Vichnevetsky and E.C. Pariser.

LCSR-TR-43 NUMERICAL METHODS FOR BASIC SOLUTIONS OF GENERALIZED 
FLOW NETWORKS, M. Grigoriadis and T. Hsu.

LCSR-TR-44 LEARNING BY RE-EXPRESSING CONCEPTS FOR EFFICIENT 
RECOGNITION, R. Keller.

LCSR-TR-45 LEARNING AND PROBLEM SOLVING, T.M. Mitchell.

LRP-TR-15 CONCEPT LEARNING BY BUILDING AND APPLYING 
TRANSFORMATIONS BETWEEN OBJECT DESCRIPTIONS, D. Nagel.

------------------------------

End of AIList Digest
********************

∂25-Sep-83  1736	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #62
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Sep 83  17:35:28 PDT
Date: Sunday, September 25, 1983 4:27PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #62
To: AIList@SRI-AI


AIList Digest            Sunday, 25 Sep 1983       Volume 1 : Issue 62

Today's Topics:
  Language Understanding & Scientific Method,
  Conferences - COLING 84
----------------------------------------------------------------------

Date: 19 Sep 83 17:50:32-PDT (Mon)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: Natural Language Understanding
Article-I.D.: utah-cs.1914

Lest usenet readers think things had gotten silent all at once, here's
an article by Fernando Pereira that (apparently and inexplicably) was
*not* sent to usenet, and my reply (fortunately, I now have read-only
access to Arpanet, so I was able to find out about this).
                        ←←←←←←←←←←←←←←←←←←←←←

    Date: Wed 31 Aug 83 18:42:08-PDT
    From: PEREIRA@SRI-AI.ARPA
    Subject: Solutions of the natural language analysis problem

[I will abbreviate the following since it was distributed in V1 #53
on Sep. 1.  -- KIL]

Given the downhill trend of some contributions on natural language 
analysis in this group, this is my last comment on the topic, and is 
essentially an answer to Stan the leprechaun hacker (STLH for short).

[...]

Lack of rigor follows from lack of method. STLH tries to bludgeon us 
with "generating *all* the possible meanings" of a sentence.  Does he 
mean ALL of the INFINITY of meanings a sentence has in general? Even 
leaving aside model-theoretic considerations, we are all familiar with

        he wanted me to believe P so he said P
        he wanted me to believe not P so he said P because he thought
           that I would think that he said P just for me to believe P
           and not believe it
        and so on ...

in spy stories.

[...]

Fernando Pereira
                         ←←←←←←←←←←←←←←←←←←←

The level of discussion *has* degenerated somewhat, so let me try to
bring it back up again.  I was originally hoping to stimulate some
debate about certain assumptions involved in NLP, but instead I seem
to see a lot of dogma, which is *very* dismaying.  Young idealistic me
thought that AI would be the field where the most original thought was
taking place, but instead everyone seems to be divided into warring
factions, each of whom refuses to accept the validity of anybody
else's approach.  Hardly seems scientific to me, and certainly other
sciences don't evidence this problem (perhaps there's some fundamental
truth here - that the nature of epistemology and other AI activities
are such that it's very difficult to prevent one's thought from being
trapped into certain patterns - I know I've been caught a couple
times, and it was hard to break out of the habit - more on that later).

As a colleague of mine put it, we seem to be suffering from a 
"difference in context".  So let me describe the assumptions 
underpinning my theory (yes I do have one):

1. Language is a very fuzzy thing.  More precisely, the set of sound
strings meaningful to a human is almost (if not exactly) the set of
all possible sound strings.  Now, before you flame, consider:  Humans
can get at least *some* understanding out of a nonsense sequence,
especially if they have any expectations about what they're hearing
(this has been demonstrated experimentally) although it will likely be
wrong.  Also, they can understand mispronounced or misspelled words,
sentences with missing words, sentences with repeated words, sentences
with scrambled word order, sentences with mixed languages (I used to
have fun by speaking English using German syntax, and you can
sometimes see signs using English syntax with "German" words), and so
forth.  Language is also used creatively (especially netters!).  Words
are continually invented, metaphors are created and mixed in novel
ways. I claim that there is no rule of grammar that cannot be
violated.  Note that I have said *nothing* about changes of meaning,
nor have I claimed that one could get much of anything out of a random
sequence of words strung together.  I have only claimed that the set
of linguistically valid utterances is actually a large fuzzy set (in
the technical sense of "fuzzy").  If you accept this, the implications
for grammar are far-reaching
- in fact, it may be that classical grammar is a curious but basically
irrelevant description of language (however, I'm not completely
convinced of that).

2. Meaning and interpretation are distinct.  Perhaps I should follow
convention and say "s-meaning" and "s-interpretation", to avoid
terminology trouble.  I think it's noncontroversial that the "true
meaning" of an utterance can be defined as the totality of response to
that utterance.  In that case, s-meaning is the individual-independent
portion of meaning (I know, that's pretty vague.  But would saying
that 51% of all humans must agree on a meaning make it any more
precise?  Or that there must be a predicate to represent that meaning?
Who decides which predicate is appropriate?).  Then s-interpretation
is the component that depends primarily on the individual and his
knowledge, etc.

Let's consider an example - "John kicked the bucket."  For most 
people, this has two s-meanings - the usual one derived directly from
the words and an idiomatic way of saying "John died".  Of course,
someone may not know the idiom, so they can assign only one s-meaning.
But as Mr. Pereira correctly points out, there are an infinitude of
s-interpretations, which vary completely from individual to
individual.  Most can be derived from the s-meaning, for instance the
convoluted inferences about belief and intention that Mr. Pereira
gave.  On the other hand, I don't normally make those
s-interpretations, and a "naive" person might *never* do so.  Other
parts of the s-interpretation could be (if the second s-meaning above
was intended) that the speaker tends to be rather blunt; certainly
part of the response to the utterance, but less clearly part of a
"meaning".  Even s-meanings are pretty volatile, though - to use
another spy story example, the sentence might actually be a code
phrase with a completely arbitrary meaning!

3. Cognitive science is relevant to NLP.  Let me be the first to say
that all of its results are at best suspect.  However, the apparent
inclination of many AI people to regard the study of human cognition
as "unscientific" is inexplicable.  I won't claim that my program
defines human cognition, since that degree of hubris requires at least
a PhD :-) .  But cognitive science does have useful results, like the
aforementioned result about making sense out of nonsense.  Also, a
lot of common-sense results can be described more precisely by doing
experiments.  "Don't think of a zebra for the next ten minutes" - my
informal experimentation indicates that *nobody* is capable - that
seems to say a lot about how humans operate.  Perhaps cognitive
science gets a bad review because much of it is Gedanken experiments;
I don't need tests on a thousand subjects to know that most kinds of 
ungrammaticality (such as number agreement) are noticeable, but rarely
affect my understanding of a sentence.  That's why I say that humans
are experts at their own languages - we all (at least intuitively)
understand the different parts of speech and how sentences are put
together, even though we have difficulty expressing that knowledge
(sounds like the knowledge engineer's problems in dealing with
experts!).  BTW, we *have* had a non-expert (a CS undergrad) add
knowledge to our NLP system, and the folks at Berkeley have reported
similar results [Wilensky81].

4.  Theories should reflect reality.  This is especially important
because the reverse is quite pernicious - one ignores or discounts
information not conforming to one's theories.  The equations of motion
are fine for slow-speed behavior, but fail as one approaches c (the
language or the velocity? :-) ).  Does this mean that Lorentz
contractions are experimental anomalies?  The grammar theory of
language is fine for very restricted subsets of language, but is less
satisfactory for explaining the phenomena mentioned in 1, and does not
suggest how organisms *learn* language.  Mr. Pereira's suggestion that
I do not have any kind of theoretical basis makes me wonder if he
knows what Phrase Analysis *is*, let alone its justification.
Wilensky and Arens of UCB have IJCAI-81 papers (and tech reports) that
justify the method much better than I possibly could.  My own
improvement was to make it follow multiple lines of parsing (have to
be contrite on this; I read Winograd's new book recently and what I
have is really a sort of active chart parser; also noticed that he
gives nary a mention to Phrase Analysis, which is inexcusable - that's
the sort of thing I mean by "warring factions").

4a.  Reflecting reality means "all of it" or (less preferable) "as
much as possible".  Most of the "soft sciences" get their bad 
reputation by disregarding this principle, and AI seems to have a 
problem with that also.  What good is a language theory that cannot
account for language learning, creative use of language, and the
incredible robustness of language understanding?  The definition of
language by grammar cannot properly explain these - the first because
of results (again mentioned by Winograd) that children receive almost
no negative examples, and that a grammar cannot be learned from
positive examples alone; the third because the grammar must be
extended and extended until it recognizes all strings as valid.  So
perhaps the classical notion of grammar is like classical mechanics -
useful for simple things, but not so good for photon drives or
complete NLP systems.  The basic notions in NLP have been thoroughly
investigated;

IT'S TIME TO DEVELOP THEORIES THAT CAN EXPLAIN *ALL* ASPECTS OF 
LANGUAGE BEHAVIOR!


5. The existence of "infinite garden-pathing".  To steal an example
from [Wilensky80],

        John gave Mary a piece of his.........................mind.

Only the last word disambiguates the sentence.  So now, what did *you*
fill in, before you read that last word?  There are even more
interesting situations.  Part of my secret research agenda (don't tell
Boeing!) has been the understanding of jokes, particularly word plays.
Many jokes are multi-sentence versions of garden-pathing, where only
the punch line disambiguates.  A surprising number of crummy sitcoms
can get a whole half-hour because an ambiguous sentence is interpreted
differently by two people (a random thought - where *did* this notion
of sentence as fundamental structure come from?  Why don't speeches
and discourses have a "grammar" precisely defining *their* 
structure?).  In general, language is LR(lazy eight).

Miscellaneous comments:

This has gotten pretty long (a lot of accusations to respond to!), so
I'll save the discussion of AI dogma, fads, etc for another article.

When I said that "problems are really concerned with the acquisition
of linguistic knowledge", that was actually an awkward way to say
that, having solved the parsing problem, my research interests
switched to the implementation of full-scale error correction and
language learning (notice that Mr. Pereira did not say "this is
ambiguous - what did you mean?", he just assumed one of the meanings
and went on from there.  Typical human language behavior, and
inadequately explained by most existing theories...).  In fact, I have
a detailed plan for implementation, but grad school has interrupted
that and it may be a while before it gets done.  So far as I can tell,
the implementation of learning will not be unusually difficult.  It 
will involve inductive learning, manipulation of analogical 
representations to acquire meanings ("an mtrans is like a ptrans, but
with abstract objects"....), and other good things.  The 
nonrestrictive nature of Phrase Analysis seems to be particularly 
well-suited to language knowledge acquisition.

Thanks to Winograd (really quite a good book, but biased) I now know
what DCG's are (the paper I referred to before was [Pereira80]).  One
of the first paragraphs in that paper was revealing.  It said that
language was *defined* by a grammar, then proceeded from there.
(Different assumptions....) Since DCG's were compared only to ATN's,
it was of course easy to show that they were better (almost any
formalism is better than one from ten years before, so that wasn't
quite fair).  However, I fail to see any important distinction between
a DCG and a production rule system with backtracking.  In that case, a
DCG is really a special case of a Phrase Analysis parser (I did at one
time tinker with the notion of compiling phrase rules into OPS5 rules,
but OPS5 couldn't manage it very well - no capacity for the
parallelism that my parser needed).  I am of course interested in
being contradicted on any of this.
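To make that claim a bit more concrete, here is a toy backtracking
production-rule recognizer (entirely my own sketch - the grammar,
lexicon, and names are invented for illustration, and this is not the
Phrase Analysis parser or the DCG machinery itself).  Each nonterminal
has alternative right-hand sides; alternatives are tried in order, and
failure falls through to the next one, which is essentially how a DCG
behaves under Prolog's backtracking:

```python
GRAMMAR = {                                   # invented toy grammar
    "S":  [["NP", "VP"]],
    "NP": [["det", "noun"], ["noun"]],
    "VP": [["verb", "NP"], ["verb"]],
}
LEXICON = {"det": {"the", "a"},
           "noun": {"john", "mary", "bucket"},
           "verb": {"kicked", "died"}}

def parse(symbol, words, i):
    """Yield every position reachable after matching `symbol` at `i`."""
    if symbol in LEXICON:                     # terminal category
        if i < len(words) and words[i] in LEXICON[symbol]:
            yield i + 1
        return
    for rhs in GRAMMAR[symbol]:               # each alternative = one production
        positions = [i]
        for part in rhs:                      # thread positions through the RHS
            positions = [k for j in positions for k in parse(part, words, j)]
        yield from positions                  # next rhs on failure = backtracking

def accepts(sentence):
    words = sentence.lower().split()
    return any(j == len(words) for j in parse("S", words, 0))

print(accepts("John kicked the bucket"))   # True
print(accepts("kicked the John"))          # False
```

The generator-based search plays the role of Prolog's backtracking; a
DCG adds argument passing on the nonterminals, but the control
structure is the same.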

Mr. Pereira says he doesn't know what the "Schank camp" is.  If that's
so then he's the only one in NLP who doesn't.  I have heard some
highly uncomplimentary comments about Schank and his students.  But
then that's the price for going against conventional wisdom...

Sorry for the length, but it *was* time for some light rather than
heat!  I have refrained from saying much of anything about my theories
of language understanding, but will post details if accusations
warrant :-)

                                Theoretically yours*,
                                Stan (the leprechaun hacker) Shebs
                                utah-cs!shebs

* love those double meanings!

[Pereira80] Pereira, F.C.N., and Warren, D.H.D. "Definite Clause
    Grammars for Language Analysis - A Survey of the Formalism and
    a Comparison with Augmented Transition Networks", Artificial
    Intelligence 13 (1980), pp 231-278.

[Wilensky80] Wilensky, R. and Arens, Y.  PHRAN: A Knowledge-based
    Approach to Natural Language Analysis (Memorandum No.
    UCB/ERL M80/34).  University of California, Berkeley, 1980.

[Wilensky81] Wilensky, R. and Morgan, M.  One Analyzer for Three
    Languages (Memorandum No. UCB/ERL M81/67). University of
    California, Berkeley, 1981.

[Winograd83] Winograd, T.  Language as a Cognitive Process, vol. 1:
    Syntax.  Addison-Wesley, 1983.

------------------------------

Date: Fri 23 Sep 83 14:34:44-CDT
From: Lauri Karttunen <Cgs.Lauri@UTEXAS-20.ARPA>
Subject: COLING 84 -- Call for papers

               [Reprinted from the UTexas-20 bboard.]


                              CALL FOR PAPERS

   COLING 84, TENTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL LINGUISTICS

COLING 84 is scheduled for 2-6 July 1984 at Stanford University,
Stanford, California.  It will also constitute the 22nd Annual Meeting
of the Association for Computational Linguistics, which will host the
conference.

Papers for the meeting are solicited on linguistically and
computationally significant topics, including but not limited to the
following:

   o Machine translation and machine-aided translation.

   o Computational applications in syntax, semantics, anaphora, and
       discourse.

   o Knowledge representation.

   o Speech analysis, synthesis, recognition, and understanding.

   o Phonological and morpho-syntactic analysis.

   o Algorithms.

   o Computational models of linguistic theories.

   o Parsing and generation.

   o Lexicology and lexicography.

Authors wishing to present a paper should submit five copies of a
summary not more than eight double-spaced pages long, by 9 January
1984 to: Prof.  Yorick Wilks, Languages and Linguistics, University of
Essex, Colchester, Essex, CO4 3SQ, ENGLAND [phone: 44-(206)862 286;
telex 98440 (UNILIB G)].

It is important that the summary contain sufficient information,
including references to relevant literature, to convey the new ideas
and allow the program committee to determine the scope of the work.
Authors should clearly indicate to what extent the work is complete
and, if relevant, to what extent it has been implemented.  A summary
exceeding eight double-spaced pages in length may not receive the
attention it deserves.

Authors will be notified of the acceptance of their papers by 2 April
1984.  Full length versions of accepted papers should be sent by 14
May 1984 to Dr. Donald Walker, COLING 84, SRI International, Menlo
Park, California, 94025, USA [phone: 1-(415)859-3071; arpanet:
walker@sri-ai].

Other requests for information should be addressed to Dr. Martin Kay,
Xerox PARC, 3333 Coyote Hill Road, Palo Alto, California 94304, USA 
[phone: 1-(415)494-4428; arpanet: kay@parc].


------------------------------

End of AIList Digest
********************

∂25-Sep-83  2055	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #63
Received: from SRI-AI by SU-AI with TCP/SMTP; 25 Sep 83  20:54:48 PDT
Date: Sunday, September 25, 1983 7:47PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #63
To: AIList@SRI-AI


AIList Digest            Monday, 26 Sep 1983       Volume 1 : Issue 63

Today's Topics:
  Robotics - Physical Strength,
  Parallelism & Physiology,
  Intelligence - Turing Test,
  Learning & Knowledge Representation,
  Rational Psychology
----------------------------------------------------------------------

Date: 21 Sep 83 11:50:31-PDT (Wed)
From: ihnp4!mtplx1!washu!eric @ Ucb-Vax
Subject: Re: Strong, agile robot
Article-I.D.: washu.132

I just glanced at that article for a moment, noting the leg mechanism 
detail drawing.  It did not seem to me that the beastie could move
very fast.  Very strong IS nice, tho...  Anyway, the local supplier of
that mag sold them all.  Anyone remember if it said how fast it could
move, and with what payload?

eric ..!ihnp4!washu!eric

------------------------------

Date: 23 Sep 1983 0043-PDT
From: FC01@USC-ECL
Subject: Parallelism

I thought I might point out that virtually no machine built in the
last 20 years is actually lacking in parallelism. In reality, just as
the brain has many neurons firing at any given time, computers have
many transistors switching at any given time. Just as the cerebellum
is able to maintain balance without the higher brain functions in the
cerebrum explicitly controlling the IO, most current computers have IO
controllers capable of handling IO while the CPU does other things.
Just as people have faster short term memory than long term memory but
less of it, computers have faster short term memory than long term 
memory and use less of it. These are all results of cost/benefit
tradeoffs for each implementation, just as I presume our brains and
bodies are. Don't be so fast to think that real computer designers are
ignorant of physiology. The trend towards parallelism now is more like
the human social system of having a company work on a problem. Many
brains, each talking to each other when they have questions or
results, each working on different aspects of a problem. Some people
have breakdowns, but the organization keeps going. Eventually it comes
up with a product; it may not really solve the problem posed at the
beginning, but it may have solved a related problem or found a better
problem to solve.

        Another copyrighted excerpt from my not yet finished book on
computer engineering modified for the network bboards, I am ever
yours,
                                        Fred

------------------------------

Date: 14 Sep 83 22:46:10-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: in defense of Turing - (nf)
Article-I.D.: uiucdcs.2822



Two points where Martin Taylor's response reveals that I was not
emphatic enough [you see, it is possible to underflame, and thus be 
misunderstood!] in my comments on the Turing test.

1. One of Dennett's main points (which I did not mention, since David
Rogers had already posted it in the original note of this string) is
that the unrestricted Turing-like test of which he spoke is a
SUFFICIENT, but not a NECESSARY test for intelligence comparable to
that possessed and displayed by most humans in good working order.  [I
myself would add that it tests as much for mastery of human
communication skills (which are indeed highly dependent on particular
cultures) as it does for intelligence.] That is to say, if a program
passes such a rigorous test, then the practitioners of AI may
congratulate themselves for having built such a clever beast.
However, a program which fails such a test need not be considered
unintelligent.  Indeed, a human who fails such a test need not be
considered unintelligent -- although one would probably consider
him/her to be of substandard intelligence, or of impaired
intelligence, or dyslexic, or incoherent, or unconscious, or amnesic,
or aphasic, or drunk (i.e. disabled in some fashion).

2. I did not post "a set of criteria which an AI system should pass to
be accepted as human-like at a variety of levels."  I posted a set of
tests by which to gauge progress in the field of AI.  I don't imagine
that these tests have anything to do with human-ness.  I also don't
imagine that many people who discuss and discourse upon "intelligence"
have any coherent definition for what it might be.


Other comments that seem relevant (but might not be)
----- -------- ---- ---- -------- ---- ----- --- ---

Neither Dennett's test, nor my tests are intended to discern whether
or not the entity in question possesses a human brain.

In addition to flagrant use of hindsight, my tests also reveal my bias
that science is an endeavor which requires intelligence on the part of
its human practitioners.  I don't mean to imply that it is the only
such domain.  Other domains which require that the people who live in 
them have "smarts" are puzzle solving, language using, language
learning (both first and second), etc.  Other tasks not large enough
to qualify as domains that require intelligence (of a degree) from
people who do them include: figuring out how to use a paper clip or a
stapler (without being told or shown), figuring out that someone was 
showing you how to use a stapler (without being told that such
instruction was being given), improvising a new tool or method for a
routine task that one is accustomed to doing with an old tool or
method, realizing that an old method needs improvement, etc.

The interdependence of intelligence and culture is much more important
than we usually give it credit for.  Margaret Mead must have been
quite a curiosity to the peoples she studied.  Imagine that a person
of such a different and strange (to us) culture could be made to
understand enough about machines and the Turing test so that he/she 
could be convinced to serve as an interlocutor...  On second thought,
that opens up such a can of worms that I'd rather deny having proposed
it in the first place.

------------------------------

Date: 19 Sep 83 17:43:53-PDT (Mon)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: utah-cs.1913

I just read Jon Doyle's article about Rational Psychology in the
latest AI Magazine (Fall '83), and am also very interested in the
ideas therein.  The notion of trying to find out what is *possible* 
for intelligences is very intriguing, not to mention the idea of
developing some really sound theories for a change.

Perhaps I could mention something I worked on a while back that
appears to be related.  Empirical work in machine learning suggests
that there are different levels of learning - learning by being
programmed, learning by being told, learning by example, and so forth,
with the levels being ordered by their "power" or "complexity",
whatever that means.  My question: is there something fundamental
about this classification?  Are there other levels?  Is there a "most
powerful" form of learning, and if so, what is it?

I took the approach of defining "learning" as "behavior modification",
even though that includes forgetting (!), since I wasn't really
concerned with whether the learning resulted in an "improvement" in
behavior or not.  The model of behavior was somewhat interesting.
It's kind of a dualistic thing, consisting of two entities:  the
organism and the environment.  The environment is everything outside,
including the organism's own physical body, while the organism is
more or less equivalent to a mind.  Each of these has a state, and
behavior can be defined as functions mapping the set of all states to
itself.  Both the environment and the organism have behaviors that can
be treated in the same way (that is, they are like mirror images of
each other).  The whole development is too elaborate for an ASCII
terminal, but it boiled down to this:  that since learning is a part
of behavior, but it also *modifies* behavior, then there is a part of
the behavior function that is self-modifying.  One can then define
"1st order learning" as that which modifies ordinary behavior.  2nd
order learning would be "learning how to learn", 3rd order would be
"learning how to learn how to learn" (whatever *that* means!).  The
definitions of these are more precise than my Anglicization here, and
seem to indicate a whole infinite hierarchy of learning types, each
supposedly more powerful than the last.  It doesn't do much for my
original questions, because the usual types of learning are all 1st
order - although they don't have to be.  Lenat's work on learning
heuristics might be considered 2nd order, and if you look at it in the
right way, it may be that EURISKO actually implements all
orders of learning at the same time, so the above discussion is 
garbage (sigh).
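The orders can be made a little more concrete with a toy sketch (my
own illustration, with invented names - not the formal development
itself): behavior is a function from states to actions, a 1st-order
learner returns a modified behavior, and a 2nd-order learner returns a
modified *learner*:

```python
def behavior(state):
    return state + 1                       # some ordinary behavior

def first_order_learn(behave, feedback):
    """1st order: return a modified behavior."""
    return lambda state: behave(state) + feedback

def second_order_learn(learner, scale):
    """2nd order: return a modified learner (learning how to learn)."""
    return lambda behave, feedback: learner(behave, feedback * scale)

b1 = first_order_learn(behavior, 10)             # learning changed what we do
print(b1(0))                                     # 11

eager = second_order_learn(first_order_learn, 5) # learning changed how we learn
b2 = eager(behavior, 10)
print(b2(0))                                     # 51
```

Nothing here stops you from writing a 3rd-order learner that modifies
second_order_learn, and so on up the hierarchy.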

Another question that has concerned me greatly (particularly since
building my parser) is the relation of the Halting Problem to AI.  My
program was basically a production system, and had an annoying
tendency to get caught in infinite loops of various sorts.  More
misfeatures than bugs, though, since the theory did not expressly
forbid such loops!  To take a more general example, why don't circular
definitions cause humans to go catatonic?  What is the mechanism that
seems to cut off looping?  Do humans really beat the Halting Problem?
One possible mechanism is that repetition is boring, and so all loops
are cut off at some point or else pushed so far down on the agenda of
activities that they are effectively terminated.  What kind of theory
could explain this?
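For what it's worth, the "repetition is boring" cutoff can be sketched
as a check against previously seen states (a hypothetical mechanism of
my own, not anything my parser actually implements):

```python
def run(rules, state, limit=1000):
    """Fire `rules` repeatedly, but stop as soon as a state repeats."""
    seen = {state}
    for _ in range(limit):
        state = rules(state)
        if state in seen:            # been here before: the "boredom" cutoff
            return state, "loop detected"
        seen.add(state)
    return state, "limit reached"

flip = lambda s: 1 - s               # a rule set that oscillates forever
print(run(flip, 0))                  # (0, 'loop detected')
```

This doesn't beat the Halting Problem, of course; it only catches
loops whose state actually recurs, which may be all that humans catch
either.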

Yet another (last one folks!) question is one that I raised a while
back, about all representations reducing down to attribute-value
pairs.  Yes, they used to be fashionable but are now out of style, but
I'm talking about a very deep underlying representation, in the same
way that the syntax of s-expressions underlies Lisp.  Counterexamples 
to my conjecture about AV-pairs being universal were algebraic 
expressions (which can be turned into s-expressions, which can be 
turned into AV-pairs) and continuous values, but they must have *some*
closed form representation, which can then be reduced to AV-pairs.  So
I remain unconvinced that the notion of objects with AV-pairs 
attached is *not* universal (of course, for some things, the
representation is so primitive as to be as bad as Fortran, but then
this is an issue of possibility, not of goodness or efficiency).
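As an illustration of that reduction chain (my own toy code - the
names are invented), here is an algebraic expression rendered as an
s-expression and then flattened into objects carrying AV-pairs:

```python
import itertools

_ids = itertools.count(1)

def to_av(sx, table):
    """Flatten an s-expression (nested tuples) into AV-pair objects."""
    if not isinstance(sx, tuple):
        return sx                       # atoms represent themselves
    name = f"node{next(_ids)}"          # fresh object for this subexpression
    avs = {"op": sx[0]}
    for i, arg in enumerate(sx[1:], 1):
        avs[f"arg{i}"] = to_av(arg, table)
    table[name] = avs
    return name

objects = {}
root = to_av(("*", ("+", "a", "b"), 2), objects)   # (a + b) * 2
print(root)                  # node1
print(objects["node1"])      # {'op': '*', 'arg1': 'node2', 'arg2': 2}
print(objects["node2"])      # {'op': '+', 'arg1': 'a', 'arg2': 'b'}
```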

Looking forward to comments on all of these questions...

                                        stan the l.h.
                                        utah-cs!shebs

------------------------------

Date: 22 Sep 83 11:26:47-PDT (Thu)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: drufl.663

        To me personally, Rational Psychology is a misnomer.
"Rational" negates what "Psychology" wants to understand.

Flames to /dev/null.
Interesting discussions welcome.


                                    Samir Shah
                                    drufl!samir
                                    AT&T Information Systems, Denver.

------------------------------

Date: 22 Sep 83 17:12:11-PDT (Thu)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: ariel.456

         Samir's view:  "To me personally, Rational Psychology
                         is a misnomer. "Rational" negates
                         what "Psychology" wants to understand."


How so?

Can you support your claim? What does psychology want to understand
that Rationality negates?  Psychology is the Logos of the Psyche or
the logic of the psyche.  How does one understand without logic?  How
does one understand without rationality?  What is understand?  Isn't
language itself dependent upon the rational faculty, or more
specifically, upon the ability to form concepts, as opposed to
percepts?  Can you understand without language?  To be totally without
rationality (lacking the functional capacity for rationality
- the CONCEPTUAL faculty) would leave you without language, and
therefore without understanding.  In what TERMS is something said to
be understood?  How can terms have meaning without rationality?

Or perhaps you might claim that because men are not always rational
that man does not possess a rational faculty, or that it is defective,
or inadequate?  How about telling us WHY you think Rational negates
Psychology?

These issues are important to AI, psychology and philosophy
students...  The day may not be far off when AI research yields
methods of feature abstraction and integration that approximate
percept-formation in humans.  The next step, concept formation, will
be much harder.  How does an epistemology come about?  What are the
sequential steps necessary to form an epistemology of any kind?  By
what method does the mind (what's that?) integrate percepts into
concepts, make identifications on a conceptual level ("It is an X"),
justify its identifications ("and I know it is an X because..."), and
then decide (what's that?) what to do about it ("...so therefore I
should do Y")?

Do you seriously think that understanding these things won't take
Rationality?

Norm Andrews, AT&T Information Systems, Holmdel, N.J. ariel!norm

------------------------------

Date: 22 Sep 83 12:02:28-PDT (Thu)
From: decvax!genrad!mit-eddie!mit-vax!eagle!mhuxi!mhuxj!mhuxl!achilles
      !ulysses!princeton!leei@Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: princeto.77

I really think that the ability that we humans have that allows us to
avoid looping is the simple ability to recognize a loop in our logic
when it happens.  This comes as a direct result of our tendency for
constant self-inspection and self-evaluation.  A machine with this
ability, and the ability to inspect its own self-inspections . . .,
would probably also be able to "solve" the halting problem.

Of course, if the loop is too subtle or deep, then even we cannot see
it.  This may explain the continued presence of various belief systems
that rely on inherently circular logic to get past their fundamental
problems.


                                        -Lee Iverson
                                        ..!princeton!leei

------------------------------

End of AIList Digest
********************

∂26-Sep-83  2348	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #64
Received: from SRI-AI by SU-AI with TCP/SMTP; 26 Sep 83  23:47:27 PDT
Date: Monday, September 26, 1983 9:28PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #64
To: AIList@SRI-AI


AIList Digest            Tuesday, 27 Sep 1983      Volume 1 : Issue 64

Today's Topics:
  Database Systems - DBMS Software Available,
  Symbolic Algebra - Request for PRESS,
  Humor - New Expert Systems,
  AI at Edinburgh - Michie & Turing Institute,
  Rational Psychology - Definition,
  Halting Problem & Learning,
  Knowledge Representation - Course Announcement
----------------------------------------------------------------------

Date: 21 Sep 83 16:17:08-PDT (Wed)
From: decvax!wivax!linus!philabs!seismo!hao!csu-cs!denelcor!pocha@Ucb-Vax
Subject: DBMS Software Available
Article-I.D.: denelcor.150

Here are 48 vendors of the most popular DBMS packages, which will be
presented at the National Database & 4th Generation Language Symposium,
Boston, Dec. 5-8, 1983, Radisson-Ferncroft Hotel, 50 Ferncroft Rd.,
Danvers, MA.  For information, write: Software Institute of America,
339 Salem St., Wakefield, MA 01880, (617)246-4280.
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
Applied Data Research   DATACOM, IDEAL |Mathematica Products    RAMIS II
Battelle - - - - - - -  BASIS          |Manager Software Prod.  DATAMANAGER
Britton-Lee             IDM            |                        DESIGNMANAGER
Cincom Systems          TIS, TOTAL,    |                        SOURCEMANAGER
                        MANTIS         |National CSS, Inc.      NOMAD2
Computer Associates     CA-UNIVERSE    |Oracle Corp.            ORACLE
Computer Co. of America MODEL 204      |Perkin-Elmer            RELIANCE
                        PRODUCT LINE   |Prime Computer          PRIME DBMS
Computer Techniques     QUEO-IV        |                        INFORMATION
Contel - - - - - - - -  RTFILE         |Quasar Systems          POWERHOUSE
Cullinet Software       IDMS, ADS      |                        POWERPLAN
Database Design, Inc.   DATA DESIGNER  |Relational Tech. Inc.   INGRES
Data General            DG/DBMS        |Rexcom Corp.            REXCOM
                        PRESENT        |Scientific Information  SIR/DBMS
Digital Equipment Co.   VAX INFO. ARCH |Seed Software           SEED
Exact Systems & Prog.   DNA-4          |Sensor Based System     METAFILE
Henco Inc.              INFO           |Software AG of N.A.     ADABAS
Hewlett Packard         IMAGE          |Software House          SYSTEM 1022
IBM Corp.               SQL/DS, DB2    |Sydney Development Co.  CONQUER
Infodata Systems        INQUIRE        |Tandem Computers        ENCOMPASS
Information Builders    FOCUS          |Tech. Info. Products    IP/3
Intel Systems Corp.     SYSTEM 2000    |Tominy, Inc.            DATA BASE-PLUS
←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←
                                     John Pocha
                                    Denelcor, Inc.
                                    17000 E. Ohio Place
                                    Aurora, Colorado 80017
                                    work (303)337-7900 x379
                                    home (303)794-5190
                                 {csu-cs|nbires|brl-bmd}!denelcor!pocha

------------------------------

Date: 23 Sep 83 19:04:12-PDT (Fri)
From: decvax!tektronix!tekchips!wm @ Ucb-Vax
Subject: Request for PRESS
Article-I.D.: tekchips.317

Does anyone know where I can get the PRESS algebra system, by Alan
Bundy, written in Prolog?

                        Wm Leler
                        tektronix!tekchips!wm
                        wm.Tektronix@Rand-relay

------------------------------

Date: 23 Sep 83 1910 EDT (Friday)
From: Jeff.Shrager@CMU-CS-A
Subject: New expert systems announced:

Dentrol: A dental expert system based upon tooth maintenance
      principles.
Faust: A black magic advisor with mixed initiative goal generation.
Doug: A system which will convert any given domain into set theory.
Cray: An expert arithmetic advisor.  Heuristics exist for any sort of
      real number computation involving arithmetic functions (+, -,
      and several others) within a finite (but large) range around 0.0.
      The heuristics are shown to be correct for typical cases.
Meta: An expert at thinking up new domains in which there should be
      expert systems.
Flamer: An expert at seeming to be an expert in any domain in which it
      is not an expert.
IT: (The Illogic Theorist) An expert at fitting any theory to any
      quantity of protocol data.  Theories must be specified in "ITLisp"
      but IT can construct the protocols if need be.

------------------------------

Date: 22 Sep 83 23:25:15-PDT (Thu)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: U of Edinburgh, Scotland Inquiry - (nf)
Article-I.D.: uiucdcs.2935


I can't tell you about the Dept of AI at Edinburgh, but I do know
about the Machine Intelligence Research Unit chaired by Prof. Donald
Michie.

The MIRU will fold in future, because Prof Michie intends to set up a
new research institute in the UK. He's been planning this and fighting
for it for quite a while now. It will be called the "Turing
Institute", and is intended to become one of the prime centers of AI
research in the UK. In fact, it will be one of the very few centers at
which research is the top priority, rather than teaching. Michie has
recently been approached by the University of Strathclyde near
Glasgow, which is interested in functioning as the associated teaching
institution (cf. SRI and Stanford). If that works out, the Turing 
Institute may be operational by September 1984.

------------------------------

Date: 23 Sep 83 5:04:46-PDT (Fri)
From: decvax!microsoft!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: ssc-vax.538

(should be posting from utah, but I saw it here first and just 
couldn't resist...)

I think we've got a terminology problem here.  The word "rational" is
so heavily loaded that it can hardly move! (as net.philosophy readers
well know).  The term "rational psychology" does seem to exclude
non-rational behavior (whatever that is) from consideration, which is
not true at all.  Rather, the idea is to explore the entire universe
of possibilities for intelligent behavior, rather than restricting
oneself to observing the average college sophomore or the AI programs
small enough to fit on present-day machines.

Let me propose the term "universal psychology" as a substitute, 
analogous to the mathematical study of universal algebras.  Fewer
connotations, and it better suggests the real thrust of this field -
the study of *possible* intelligent behavior.

                                stan the r.h. (of lightness)
                                ssc-vax!sts
                                (but mail to harpo!utah-cs!shebs)

------------------------------

Date: 26 Sep 1983 0012-PDT
From: Jay <JAY@USC-ECLC>
Subject: re: the halting problem, orders of learning

Certain representations of calculations lead to easy
detection of looping.  Consider the function...
	f(x) = x
This could lead to ...
	f(f(x)) = x 
Or to ...
	f(f(f(f( ... )))) = x
But why bother!  Or for another example, consider the life blinker..
                   +
 + + +   becomes   +   becomes  + + +   becomes (etc.)
                   +
Why bother calculating all the generations for this arrangement?  The
same information lies in ...
for any integer i                         +
Blinker(2i) = + + +  and Blinker(2i+1) =  +
                                          +
There really is no halting problem, or infinite looping.  The
information for the blinker need not be fully decoded, it can be just
the above "formulas".  So humans could choose a representation of
circular or "infinite looping" ideas, so that the circularity is
expressed in a finite number of bits.
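
The blinker argument can be sketched directly in code (a minimal
modern sketch, not part of the original post): simulate until a state
repeats, then answer any generation by index arithmetic instead of
further simulation.

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Life on a set of live (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in cells)}

def find_cycle(start):
    """Run until a repeated state; return (preperiod, period, states)."""
    seen = {}                 # frozen state -> generation first seen
    states = []
    state = frozenset(start)
    gen = 0
    while state not in seen:
        seen[state] = gen
        states.append(state)
        state = frozenset(life_step(state))
        gen += 1
    first = seen[state]
    return first, gen - first, states

def generation(n, preperiod, period, states):
    """Generation n in O(1), never simulating past the first cycle."""
    if n < preperiod:
        return states[n]
    return states[preperiod + (n - preperiod) % period]

blinker = {(0, 1), (1, 1), (2, 1)}         # the + + + row
pre, period, states = find_cycle(blinker)  # the blinker has period 2
```

Once the cycle is found, Blinker(2i) and Blinker(2i+1) really are the
whole story: the circularity is expressed in a finite number of bits.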

As for the orders of learning: Learning(1) is a behavior.  That is,
modifying behavior is a behavior.  It can be observed in schools,
concentration camps, or even in the laboratory.  So learning(2) is
modifying a certain behavior, and thus nothing more (in one view)
than learning(1).  Indeed it is just learning(1) applied to itself!
So learning(i) is just
                              i
(the way an organism modifies)  its behavior 

But since behavior is just the way an organism modifies the
environment,
                                            i+1
Learning(i) = (the way an organism modifies)    the environment.

and learning(0) is just behavior.  So depending on your view, there
are either an infinite number of ways to learn, or there are an
infinite number of organisms (most of whose environments are just other
organisms).
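
The recursion above is ordinary function iteration, which a few lines
make precise (a toy sketch in modern Python; the `modifies` stand-in is
hypothetical, not from the original post):

```python
def iterate(f, n):
    """Compose f with itself n times: iterate(f, 3)(x) == f(f(f(x)))."""
    def applied(x):
        for _ in range(n):
            x = f(x)
        return x
    return applied

# In Jay's terms: learning(i) = iterate(modifies, i + 1) applied to
# the environment, so learning(0) is plain behavior (one modification).
modifies = lambda env: env + ["modified"]   # toy stand-in for behavior
behavior = iterate(modifies, 1)             # learning(0)
```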

j'

------------------------------

Date: Mon 26 Sep 83 11:48:33-MDT
From: Jed Krohnfeldt <KROHNFELDT@UTAH-20.ARPA>
Subject: Re: learning levels, etc.

Some thoughts about Stan Shebs' questions:

I think that your continuum of 1st order learning, 2nd order learning,
etc. can really be collapsed to just two levels - the basic learning
level, and what has been popularly called the "meta level".  Learning
about learning about learning, is really no different than learning
about learning, is it?  It is simply a capability to introspect (and
possibly intervene) into basic learning processes.

This also proposes an answer to your second question - why don't 
humans go catatonic when presented with circular definitions - the
answer may be that we do have heuristics, or meta-level knowledge,
that prevents us from endlessly looping on circular concepts.

                                        Jed Krohnfeldt
                                         utah-cs!jed
                                       krohnfeldt@utah-20

------------------------------

Date: Mon 26 Sep 83 10:44:34-PDT
From: Bob Moore <BMOORE@SRI-AI.ARPA>
Subject: course announcement

                         COURSE ANNOUNCEMENT

                         COMPUTER SCIENCE 400

                REPRESENTATION, MEANING, AND INFERENCE


Instructor: Robert Moore
            Artificial Intelligence Center
            SRI International

Time:       MW @ 11:00-12:15 (first meeting Wed. 9/28)

Place:      Margaret Jacks Hall, Rm. 301


The problem of the formal representation of knowledge in intelligent
systems is subject to two important constraints.  First, a general
knowledge-representation formalism must be sufficiently expressive to
represent a wide variety of information about the world.  A long-term
goal here is the ability to represent anything that can be expressed
in natural language.  Second, the system must be able to draw
inferences from the knowledge represented.  In this course we will
examine the knowledge representation problem from the perspective of
these constraints.  We will survey techniques for automatically
drawing inferences from formalizations of commonsense knowledge; we
will look at some of the aspects of the meaning of natural-language
expressions that seem difficult to formalize (e.g., tense and aspect,
collective reference, propositional attitudes); and we will consider
some ways of bridging the gap between formalisms for which the
inference problem is fairly well understood (first-order predicate
logic) and the richer formalisms that have been proposed as meaning
representations for natural language (higher-order logics, intentional
and modal logics).

------------------------------

End of AIList Digest
********************

∂29-Sep-83  1120	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #65
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Sep 83  11:19:49 PDT
Date: Thursday, September 29, 1983 9:46AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #65
To: AIList@SRI-AI


AIList Digest           Thursday, 29 Sep 1983      Volume 1 : Issue 65

Today's Topics:
  Automatic Translation - French-to-English Request,
  Music and AI - Request,
  Publications - CSLI Newsletter & Apollo User's Mailing List,
  Seminar - Parallel Algorithms: Cook at UTexas Oct. 6,
  Lab Reports - UM Expansion,
  Software Distributions - Maryland Franz Lisp Code,
  Conferences - Intelligent Sys. and Machines, CSCSI,
----------------------------------------------------------------------

Date: Wed 28 Sep 83 11:37:27-PDT
From: David E.T. Foulser <FOULSER@SU-SCORE.ARPA>
Subject: Re: Automatic Translation


  I'm looking for a program to perform automatic translation from
French to English.  The output doesn't have to be perfect (I hardly
expect it).  I'll appreciate any leads you can give me.

                                Dave Foulser

------------------------------

Date: Wed 28 Sep 83 18:46:09-EDT
From: Ted Markowitz <TJM@COLUMBIA-20.ARPA>
Subject: Music & AI, pointers wanted

I'd like to hear from anyone doing work that somehow relates AI and
music in some fashion. Particularly, are folks using AI programs and
techniques in composition (perhaps as a composer's assistant)? Any
responses will be passed on to those interested in the results.

--ted

------------------------------

Date: Mon 26 Sep 83 12:08:44-CDT
From: Lauri Karttunen <Cgs.Lauri@UTEXAS-20.ARPA>
Subject: CSLI newsletter

                [Reprinted from the UTexas-20 bboard.]


A copy of the first newsletter from the Center for the Study of
Language and Information (CSLI) at Stanford is in
PS:<CGS.PUB>CSLI.NEWS.  The section on "Remote Affiliates" is of some
interest to many people here.

------------------------------

Date: Thu, 22 Sep 83 14:29:56 EDT
From: Nathaniel Mishkin <Mishkin@YALE.ARPA>
Subject: Apollo Users Mailing List

This message is to announce the creation of a new mailing list:

        Apollo@YALE

in which I would like to include all users of Apollo computers who are
interested in sharing their experiences about Apollos.  I think all
people could benefit from finding out what other people are doing on
their Apollos.

Mail to the list will be archived in some public place that I will 
announce at a later date.  At least initially, the list will not be 
moderated or digested.  If the volume is too great, this may change.  
If you are interested in getting on this mailing list, send mail to:

        Apollo-Request@YALE

If several people at your site are interested in being members and 
your mail system supports local redistribution, please tell me so I
can add a single entry (e.g. "Apollo-Podunk@PODUNK") instead of one
for each person.

------------------------------

Date: Mon 26 Sep 83 16:44:31-CDT
From: CS.GLORIA@UTEXAS-20.ARPA
Subject: Cook Colloquium, Oct 6

               [Reprinted from the UTexas-20 bboard.]


Stephen A. Cook, University of Toronto, will present a talk entitled
"Which Problems are Subject to Exponential Speed-up by Parallel Computers?"
on Thursday, Oct. 6 at 3:30 p.m. in Painter Hall 4.42.
Abstract:
      In the future we expect large parallel computers to exist with
thousands or millions of processors able to work together on a single
problem. There is already a significant literature of published algorithms
for such machines in which the number of processors available is treated
as a resource (generally polynomial in the input size) and the computation
time is extremely fast (polynomial in the logarithm of the input size).
We shall give many examples of problems for which such algorithms exist
and classify them according to the kind of algorithm which can be used.
On the other hand, we will give examples of problems with feasible sequential
algorithms which appear not to be amenable to such fast parallel algorithms.
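
The "extremely fast" regime in the abstract can be illustrated with a
prefix sum computed by doubling: with n processors it needs only about
log2(n) parallel rounds.  A minimal sketch (mine, not Cook's) that
simulates the rounds sequentially:

```python
def parallel_prefix_sum(xs):
    """Prefix sums by doubling: O(log n) simulated parallel rounds."""
    a = list(xs)
    n, shift, rounds = len(a), 1, 0
    while shift < n:
        # one parallel round: every element reads the *old* array at once
        a = [a[i] + (a[i - shift] if i >= shift else 0) for i in range(n)]
        shift *= 2
        rounds += 1
    return a, rounds

sums, rounds = parallel_prefix_sum([1, 2, 3, 4])  # ([1, 3, 6, 10], 2)
```

Each round is a single time step if all n updates happen at once, so
the total time is polynomial in the logarithm of the input size, just
as the abstract describes.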

------------------------------

Date: 21 Sep 83 16:33:08 EDT  (Wed)
From: Mark Weiser <mark%umcp-cs@UDel-Relay>
Subject: UM Expansion

[Due to a complaint that even academic job ads constitute an
"egregious violation" of Arpanet standards, and following failure of
anyone to reply to my subsequent queries, I have decided to publish
general notices of lab expansions but not specific positions.  The
following solicitation has been edited accordingly.  -- KIL]


The University of Maryland was recently awarded 4.2 million dollars
by the National Science Foundation to develop the hardware and
software for a parallel processing laboratory.  More than half of
the award amount is going directly for hardware acquisition, and
this money is also being leveraged through substantial vendor
discounts and joint research programs now being negotiated.  We
will be buying things like lots of Vaxes, Sun's, Lisp Machines,
etc., to augment our current 2 780's, ethernet, etc. system.
Several new permanent positions are being created in the Computer
Science Department for this laboratory.

[...]

Anyone interested should make initial inquiries, send resumes, etc.
to Mark Weiser at one of the addresses below:

        Mark Weiser
        Computer Science Department
        University of Maryland
        College Park, MD 20742
        (301) 454-6790/4251/6291 (in that order).
        UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!mark
        CSNet:  mark@umcp-cs
        ARPA:   mark.umcp-cs@UDel-Relay

------------------------------

Date: 26 Sep 83 17:32:04-PDT (Mon)
From: decvax!mcvax!philabs!seismo!rlgvax!cvl!umcp-cs!liz @ Ucb-Vax
Subject: Maryland software distribution
Article-I.D.: umcp-cs.2755

This is to announce the availability of the Univ of Maryland software
distribution.  This includes source code for the following:

1.  The flavors package written in Franz Lisp.  This package has
    been used successfully in a number of large systems at Maryland,
    and while it does not implement all the features of Lisp Machine
    Flavors, the features present are as close to the Lisp Machine
    version as possible within the constraints of Franz Lisp.
    (Note that Maryland flavors code *can* be compiled.)
2.  Other Maryland Franz hacks including the INTERLISP-like top
    level, the lispbreak error handling package, the for macro and
    the new loader package.
3.  The YAPS production system written in Franz Lisp.  This is
    similar to OPS5 but more flexible in the kinds of lisp expressions
    that may appear as facts and patterns (sublists are allowed
    and flavor objects are treated atomically), the variety of
    tests that may appear in the left hand sides of rules and the
    kinds of actions that may appear in the right hand sides of rules.
    In addition, YAPS allows multiple data bases which are flavor
    objects and may be sent messages such as "fact" and "goal".
4.  The windows package in the form of a C loadable library.  This
    flexible package allows convenient management of multiple
    contexts on the screen and runs on ordinary character display
    terminals as well as bit-mapped displays.  Included is a Franz
    lisp interface to the window library, a window shell for
    executing shell processes in windows, and a menu package (also
    a C loadable library).

You should be aware of the fact that the lisp software is based on
Franz Opus 38.26 and that we will be switching to the newer version
of lisp that comes with Berkeley 4.2 whenever that comes out.

---------------------------------------------------------------------

To obtain the Univ of Maryland distribution tape:

1.  Fill in the form below, make a hard copy of it and sign it.
2.  Make out a check to University of Maryland Foundation for $100,
    mail it and the form to:

                Liz Allen
                Univ of Maryland
                Dept of Computer Science
                College Park MD 20742

3.  If you need an invoice, send me mail, and I will get one to you.
    Don't forget to include your US Mail address.

Upon receipt of the money, we will mail you a tape containing our
software and the technical reports describing the software.  We
will also keep you informed of bug fixes via electronic mail.

---------------------------------------------------------------------

The form to mail to us is:


In exchange for the Maryland software tape, I certify to the
following:

a.  I will not use any of the Maryland software distribution in a
    commercial product without obtaining permission from Maryland
    first.
b.  I will keep the Maryland copyright notices in the source code,
    and acknowledge the source of the software in any use I make of
    it.
c.  I will not redistribute this software to anyone without permission
    from Maryland first.
d.  I will keep Maryland informed of any bug fixes.
e.  I am the appropriate person at my site who can make guarantees a-d.

                                Your signature, name, position,
                                phone number, U.S. and electronic
                                mail addresses.

---------------------------------------------------------------------

If you have any questions, etc, send mail to me.

--
                                -Liz Allen, U of Maryland, College Park MD
                                 Usenet:   ...!seismo!umcp-cs!liz
                                 Arpanet:  liz%umcp-cs@Udel-Relay

------------------------------

Date: Tue, 27 Sep 83 14:57:00 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: Conference Announcement


              ****************  CONFERENCE  ****************

                     "Intelligent Systems and Machines"

                    Oakland University, Rochester Michigan

                                April 24-25, 1984

              *********************************************

A call for papers should also appear through SIGART soon.

Conference Chairmen:  Dr. Donald Falkenburg (313-377-2218)
                      Dr. Nan Loh           (313-377-2222)
                      Center for Robotics and Advanced Automation
                      School of Engineering
                      Oakland University
                      Rochester, MI 48063
            ***************************************************

AUTHORS PLEASE NOTE:  A Public Release/Sensitivity Approval is necessary.
Authors from DOD, DOD contractors, and individuals whose work is government
funded must have their papers reviewed for public release and more
importantly sensitivity (i.e. an operations security review for sensitive
unclassified material) by the security office of their sponsoring agency.

In addition, I will try to answer questions for those on the net.  Mort
Queries can be sent to mort@brl

------------------------------

Date: Mon 26 Sep 83 11:08:58-PDT
From: Ray Perrault <RPERRAULT@SRI-AI.ARPA>
Subject: CSCSI call for papers

                         CALL FOR PAPERS

                         C S C S I - 8 4

                      Canadian Society for
              Computational Studies of Intelligence

                  University of Western Ontario
                         London, Ontario
                         May 18-20, 1984

     The Fifth National Conference of the CSCSI will be  held  at
the  University of Western Ontario in London, Canada.  Papers are
requested in all areas of AI research, particularly those  listed
below.  The Program Committee members responsible for these areas
are included.

Knowledge Representation :
   Ron Brachman (Fairchild R & D), John Mylopoulos (U of Toronto)
Learning :
   Tom Mitchell (Rutgers U), Jaime Carbonell (CMU)
Natural Language :
   Bonnie Weber (U of Pennsylvania), Ray Perrault (SRI)
Computer Vision :
   Bob Woodham (U of British Columbia), Allen Hanson (U Mass)
Robotics :
   Takeo Kanade (CMU), John Hollerbach (MIT)
Expert Systems and Applications :
   Harry Pople (U of Pittsburgh),  Victor  Lesser  (U  Mass)
Logic Programming :
   Randy Goebel (U of Waterloo), Veronica Dahl (Simon Fraser U)
Cognitive Modelling :
   Zenon Pylyshyn,  Ed  Stabler  (U  of Western Ontario)
Problem Solving and Planning :
   Stan Rosenschein (SRI), Drew McDermott (Yale)

     Authors are requested to prepare Full  papers,  of  no  more
than  4000  words in length, or Short papers of no more than 2000
words in length.  A full page of clear diagrams  counts  as  1000
words.   When  submitting,  authors must supply the word count as
well as the area in which they wish their paper reviewed.   (Com-
binations  of  the  above  areas are acceptable).  The Full paper
classification is intended for well-developed ideas, with  signi-
ficant demonstration of validity, while the Short paper classifi-
cation is intended for descriptions of research in progress.  Au-
thors  must  ensure that their papers describe original contribu-
tions to or novel applications of  Artificial  Intelligence,  re-
gardless of length classification, and that the research is prop-
erly compared and contrasted with relevant literature.
     Three copies of each submitted paper must be in the hands of
the  Program Chairman by December 7, 1983.  Papers arriving after
that date will be returned  unopened,  and  papers  lacking  word
count  and classifications will also be returned.  Papers will be
fully reviewed by appropriate members of the  program  committee.
Notice of acceptance will be sent on February 28, 1984, and final
camera ready versions are due on March 31,  1984.   All  accepted
papers will appear in the conference proceedings.

     Correspondence should be addressed  to  either  the  General
Chairman or the Program Chairman, as appropriate.

General Chairman                    Program Chairman

Ted Elcock,                         John K. Tsotsos
Dept. of Computer Science,          Dept. of Computer Science,
Engineering and Mathematical        10 King's College Rd.,
     Sciences Bldg.,                University of Toronto,
University of Western Ontario       Toronto, Ontario, Canada,
London, Ontario, Canada             M5S 1A4
N6A 5B9                             (416)-978-3619
(519)-679-3567

------------------------------

End of AIList Digest
********************

∂29-Sep-83  1438	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #66
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Sep 83  14:37:21 PDT
Date: Thursday, September 29, 1983 12:50PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #66
To: AIList@SRI-AI


AIList Digest            Friday, 30 Sep 1983       Volume 1 : Issue 66

Today's Topics:
  Rational Psychology - Definition,
  Halting Problem
  Natural Language Understanding
----------------------------------------------------------------------

Date: Tue 27 Sep 83 22:39:35-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Rational X

Oh dear! "Rational psychology" is no more about rational people than 
"rational mechanics" is about rational rocks or "rational 
thermodynamics" about rational hot air. "Rational X" is the 
traditional name for the mathematical, axiomatic study of systems 
inspired and intuitively related to the systems studied by the 
empirical science "X." Got it?

Fernando Pereira

------------------------------

Date: 27 Sep 83 11:57:24-PDT (Tue)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: ariel.463

Actually, the word "rational" in "rational psychology" is merely
redundant.  One would hope that psychology would be, as other
sciences, rational.  This would in no way detract from its ability to
investigate the causes of human irrationality.  No science really
should have to be prefaced with the word "rational", since we should
be able to assume that science is not "irrational".  Anyone for
"Rational Chemistry"?

Please note that the scientist's "flash of insight", "intuition",
"creative leap" is heavily dependent upon the rational faculty, the
faculty of CONCEPT-FORMATION.  We also rely upon the rational faculty
for verifying and for evaluating such insights and leaps.

--Norm Andrews, AT&T Information Systems, Holmdel, New Jersey

------------------------------

Date: 26 Sep 83 13:01:56-PDT (Mon)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Rational Psychology
Article-I.D.: drufl.670

Norm,

        Let me elaborate. Psychology, or logic of mind, involves BOTH 
rational and emotional processes. To consider one exclusively defeats 
the purpose of understanding.

        I have not read the article we are talking about so I cannot 
comment on that article, but an example of what I consider a "Rational
Psychology" theory is "Personal Construct Theory" by Kelly. It is an 
attractive theory but, in my opinion, it falls far short of describing
"logic of mind" as it fails to integrate emotional aspects.

        I consider learning-concept formation-creativity to have BOTH 
rational and emotional attributes, hence it would be better if we 
studied them as such.

        I may be creating a dichotomy where there is none. (Rational
vs.  Emotional). I want to point you to an interesting book "Metaphors
we live by" (I forget the names of Authors) which in addition to
discussing many other ai-related (without mentioning ai) concepts
discusses the question of Objective vs. Subjective, which is similar
to what we are talking here, Rational vs. Emotional.

        Thanks.

                                Samir Shah
                                AT&T Information Systems, Denver.
                                drufl!samir

------------------------------

Date: Tue, 27 Sep 1983  13:30 EDT
From: MINSKY@MIT-OZ
Subject: Re: Halting Problem

About learning:  There is a lot about how to get out of loops in my
paper "Jokes and the Cognitive Unconscious".  I can send it to whoever
wants, either over this net or by U.S. Snail.
 -- minsky

------------------------------

Date: 26 Sep 83 10:31:31-PDT (Mon)
From: ihnp4!clyde!floyd!whuxlb!pyuxll!eisx!pd @ Ucb-Vax
Subject: the Halting problem.
Article-I.D.: eisx.607

There are two AI problems that I know about: the computing power 
problem (combinatorial explosions, etc) and the "nature of thought"
problem (knowledge representation, reasoning process etc).  This
article concerns the latter.

AI's method (call it "m") seems to model human information processing
mechanisms, say legal reasoning methods, and once it is understood
clearly, and a calculus exists for it, programming it. This idea can
be transferred to various problem domains, and voila, we have programs
for "thinking" about various little cubbyholes of knowledge.

The next thing to tackle is, how do we model AI's method "m" that was 
used to create all these cubbyhole programs ?  How did whoever thought
of Predicate Calculus, semantic networks, ad nauseam block world 
theories come up with them ? Let's understand that ("m"), formalize
it, and program it. This process (let's call it "m'") gives us a
program that creates cubbyhole programs. Yeah, it runs on a zillion 
acres of CMOS, but who cares.

Since a human can do more than just "m", or "m'", we try to make 
"m''", "m'''" et al. When does this stop ? Evidently it cannot.  The
problem is, the thought process that yields a model or simulation of a
thought process is necessarily distinct from the latter (This is true
of all scientific investigation of any kind of phenomenon, not just
thought processes). This distinction is one of the primary paradigms
of western Science.

Rather naively, thinking "about" the mind is also done "with" the
mind.  This identity of subject and object that ensues in the
scientific (dualistic) pursuit of more intelligent machine behavior - 
do you folks see it too ? Since scientific thought relies on the clear
separation of a theory/model and reality, is a
mathematical/scientific/engineering discipline inadequate for said 
pursuit ? Is there a system of thought that is self-describing ? Is 
there a non-dualistic calculus ?

What we are talking about here is the ability to separate oneself from
the object/concept/process under study, understand it, model it,
program it... it being anything, including the ability it self.  The
ability to recognize that a model is a representation within one's
mind of a reality outside of one's mind. Trying to model this ability
leads one to infinite regress.  What is this ability ? Let's call it
consciousness.  What we seem to be coming up with here is the
INABILITY of math/sci etc. to deal with this phenomenon, to codify it,
and to boldly program a computer that has consciousness. Does this mean
that the statement:

"CONSCIOUSNESS CAN, MUST, AND WILL ONLY COME TO EXISTENCE OF ITS OWN
ACCORD"

is true ? "Consciousness" was used for lack of a better word. Replace 
it by X, and you still have a significant statement. Consciousness
already has come to existence; and according to the line of reasoning
above, cannot be brought into existence by methods available.

If so, how can we "help" machines to achieve consciousness, as
benevolent if rather impotent observers ?  Should we just
mechanistically build larger and larger neural network simulators
until one says "ouch" when we shut a portion of it off, and better,
tries to deliberately modify(sic) its environment so that that doesn't
happen again? And may be even can split infinitives ?

As a parting shot, it's clear that such neural networks, must have 
tremendous power to come close to a fraction of our level of
abstraction ability.

Baffled, but still thinking...  References, suggestions, discussions, 
pointers avidly sought.

Prem Devanbu

ATTIS Labs , South Plainfield.

------------------------------

Date: 27 Sep 83 05:20:08 EDT (Tue)
From: rlgvax!cal-unix!wise@SEISMO
Subject: Natural Language Analysis and looping


A side light to the discussions of the halting problem is "what then?"
What do we do when a loop is detected?  Ignore the information?
Arbitrarily select some level as the *true* meaning?

In some cases, meaning is drawn from outside the language.  As an
example, consider a person who tells you, "I don't know a secret".
The person may really know a secret but doesn't want you to know, or
may not know a secret and reason that you'll assume that nobody with a
secret would say something so suspicious ...

A reasonable assumption would be that if the person said nothing,
you'd have no reason to think he knows a secret, so if that was the
assumption which he desired for you to make, he would just have kept
quiet, so you may conclude that the person knows no secret.

This rather simplistic example demonstrates one response to the loop,
i.e., when confronted with circular logic, we disregard it.  Another
possibility is that we may use external information to attempt to help
dis-ambiguate by selecting a level of the loop. (e.g. this is a
three-year-old, who is sufficiently unsophisticated that he may say
the above when he does, in fact, know a secret.)

This may support the study of cognition as an underpinning for NLP.  
Certainly we can never expect a machine to react as we (who is 'we'?)
do unless we know how we react.

------------------------------

Date: 28 Sep 1983 1723-PDT
From: Jay <JAY@USC-ECLC>
Subject: NLP, Learning, and knowledge rep.

As an undergraduate student here at USC, I am required to pass a 
Freshman Writing class.  I have noticed in this class that one field 
of the NL Problem is UNSOLVED even in humans.  I am speaking of the 
generation of prose.

In AI terms the problems are...

The selection of a small area of the knowledge base which is small 
enough to be written about in a few pages, and large enough that a 
paper can be generated at all.

One of the solutions to this problem is called 'clustering.'  In the 
middle of a page one draws a circle about the topic.  Then a directed 
graph is built by connecting associated ideas to nodes in the graph.  
Just free association does not seem to work very well, so it is 
suggested to ask a number of questions about the main idea, or any 
other node.  Some of the questions are What, Where, When, Why (and the
rest of the "Journalistic" q's), can you RELATE an incident about it, 
can you name its PARTS, can you describe a process to MAKE or do it.  
Finally this smaller data base is reduced to a few interesting areas.
This solution is then a process of Q and A on the data base to 
construct a smaller data base.
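
The clustering step reads like a breadth-first expansion of an
association graph.  A sketch of that reading (the question list and
the answer function are my own hypothetical stand-ins, not part of
the post):

```python
QUESTIONS = ("what", "where", "when", "why", "relate", "parts", "make")

def cluster(topic, answer, depth=2):
    """Grow a directed graph of ideas by asking each question of each
    node.  answer(node, question) returns an associated idea or None;
    it stands in for the writer's free association."""
    graph = {topic: []}
    frontier = [topic]
    for _ in range(depth):
        new_frontier = []
        for node in frontier:
            for q in QUESTIONS:
                idea = answer(node, q)
                if idea is not None and idea not in graph:
                    graph[node].append(idea)
                    graph[idea] = []
                    new_frontier.append(idea)
        frontier = new_frontier
    return graph
```

The resulting graph is the "smaller data base" the message describes;
pruning it to a few interesting subtrees would be the final step.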

Once a small data base has been selected, it needs to be given a 
linear representation.  That is, it must be organized into a new data 
base that is suitable to prose.  There are no solutions offered for 
this step.

Finally the data base is coded into English prose.  There are no 
solutions offered for this step.

This prose is read back in, and compared to the original data base.  
Ambiguities need to be removed, some areas elaborated on, and others 
rewritten in a clearer style.  There are no solutions offered for this
step, but there are some rules - Things to do, and things not to do.

j'

------------------------------

Date: Tuesday, 27 September 1983 15:25:35 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Re: NL argument between STLH and Pereira

Several comments in the last message in this exchange seemed worthy of
comment.  I think my basic sympathies lie with STLH, although he
overstates his case a bit.

While language is indeed a "fuzzy thing", there are different shades
of correctness, with some sentences being completely right, some with
one obvious *error*, which is noticed by the hearer and corrected,
while others are just a mess, with the hearer guessing the right
answer.  This is similar in some ways to error-correcting codes, where
after enough errors, you can't be sure anymore which interpretation is
correct.  This doesn't say much about whether the underlying ideal is
best expressed by a grammar.  I don't think it is, for NL, but the
reason has more to do with the fact that the categories people use in
language seem to include semantics in a rather pervasive way, so that
making a major distinction between grammatical (language-specific,
arbitrary) and other knowledge (semantics) might not be the best
approach.  I could go on at length about this (in fact I'm currently
working on a Tech Report discussing this idea), but I won't, unless
pressed.

As for ignoring human cognition, some AI people do ignore it, but
others (especially here at C-MU) take it very seriously.  This seems
to be a major division in the field -- between those who think the
best search path is to go for what the machine seems best suited for,
and those who want to use the human set-up as a guide.  It seems to me
that the best solution is to let both groups do their thing --
eventually we'll find out which path (or maybe both) was right.

I read with interest your description of your system -- I am currently
working on a semantic chart parser that sounds fairly similar to your
brief description, except that it is written in OPS5.  Thus I was
surprised at the statement that OPS5 has "no capacity for the
parallelism" needed.  OPS5 users suffer from the fact that there are
some fairly non-obvious but simple ways to build powerful data
structures in it, and these have not been documented.  Fortunately, a
production system primer is currently being written by a group headed
by Elaine Kant.  Anyway, I have an as-yet-unaccepted paper describing
my OPS5 parser available, if anyone is interested.

As for scientific "camps" in AI, part of the reason for this seems to
be the fact that AI is a very new science, and often none of the
warring factions have proved their points.  The same thing happens in
other sciences, when a new theory comes out, until it is proven or
disproven.  In AI, *all* the theories are unproven, and everyone gets
quite excited.  We could probably use a little more of the "both
schools of thought are probably partially correct" way of thinking,
but AI is not alone in this.  We just don't have a solid base of
proven theory to anchor us (yet).

In regard to the call for a theory which explains all aspects of
language behavior, one could answer "any Turing-equivalent computer".
The real question is, how *specifically* do you get it to work?  Any
claim like "my parser can easily be extended to do X" is more or less
moot, unless you've actually done it.  My OPS5 parser is embedded in a
Turing-equivalent production system language.  I can therefore
guarantee that if any computer can do language learning, so can my
program.  The question is, how?  The way linguists have often wanted
to answer "how" is to define grammars that are less than
Turing-equivalent which can do the job, which I suspect is futile when
you want to include semantics.  In any event, un-implemented
extensions of current programs are probably always much harder than
they appear to be.

(As an aside about sentences as fundamental structures, there is a
two-prong answer: (1) Sentences exist in all human languages.  They
appear to be the basic "frame" [I can hear nerves jarring all over the
place] or unit for human communication of packets of information.  (2)
Some folks have actually tried to define grammars for dialogue
structures.  I'll withhold comment.)

In short, I think warring factions aren't that bad, as long as they
all admit that no one has proven anything yet (which is definitely not
always the case), semantic chart parsing is the way to go for NL,
theories that explain all of cognitive science will be a long time in
coming, and that no one should accept a claim about AI that hasn't
been implemented.

------------------------------

End of AIList Digest
********************

∂29-Sep-83  1610	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #67
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Sep 83  16:09:36 PDT
Date: Thursday, September 29, 1983 12:56PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #67
To: AIList@SRI-AI


AIList Digest            Friday, 30 Sep 1983       Volume 1 : Issue 67

Today's Topics:
  Alvey Report & Fifth Generation,
  AI at Edinburgh - Reply,
  Machine Organisms - Desirability,
  Humor - Famous Flamer's School
----------------------------------------------------------------------

Date: 23 Sep 83 13:17:41-PDT (Fri)
From: decvax!genrad!security!linus!utzoo!watmath!watdaisy!rggoebel@Ucb-Vax
Subject: Re: Alvey Report and Fifth Generation
Article-I.D.: watdaisy.298

The ``Alvey Report'' is the popular name for the following booklet:

  A Programme for Advanced Information Technology
  The Report of the Alvey Committee

  published by the British Department of Industry, and available from
  Her Majesty's Stationery Office.  One London address is

    49 High Holborn
    London WC1V 6HB

The report is indeed interesting because it is a kind of response to
the Japanese Fifth Generation Project, but it is also interesting in
that it is not nearly so much the genesis of a new project as the
organization of existing potential for research and development.  The
quickest way to explain the point is that of the proposed 352 million
pounds that the report suggests be spent, only 42 million is for
AI (actually it's not for AI, but for IKBS -- Intelligent Knowledge-Based
Systems; seniors will understand the reluctance to use the word AI after
the Lighthill report).

The areas of proposed development include 1) Software engineering,
2) Man/Machine Interfaces, 3) IKBS, and 4) VLSI.  I have heard that
the most recent national budget in Britain has not committed the
funds expected for the project, but this is only rumor.  I would appreciate
further information (Can you help D.H.D.W.?).

On another related topic, I think it displays a bit of AI chauvinism
to believe that anyone, including the Japanese and the British,
are so naive as to put all their eggs in one basket.

Incidentally, I believe Feigenbaum and McCorduck's book revealed
at least two things: a disguised plea for more funding, and a not so
disguised expose of American engineering chauvinism.  Much of the American
reaction to the Japanese project sounds like the old cliches of
male chauvinism, like ``...how could a woman ever do the work of a real man?''
It just may be that American Lispers will end up ``eating quiche.'' 8-)

Randy Goebel
Logic Programming Group
University of Waterloo
UUCP: watmath!rggoebel

------------------------------

Date: Tue 27 Sep 83 22:31:28-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Re: U of Edinburgh, Scotland Inquiry

Since the Lighthill Report, a lot has changed for AI in Britain. The 
Alvey Report (British Department of Industry) and the Science and 
Engineering Research Council (SERC) initiative on Intelligent 
Knowledge-Based Systems (IKBS) have released a lot of money for 
Information Technology in general, and AI in particular (It remains to
be seen whether that huge amount of money -- 100s of millions -- is 
going to be spent wisely). The Edinburgh Department of AI has managed 
to get a substantial slice of that money. They have been actively 
looking for people both at lecturer and research associate/fellow 
level [a good opportunity for young AIers from the US to get to know 
Scotland, her great people and unforgettable Highlands].

The AI Dept. have recently added 3 (4?) new people to their teaching 
staff, and have more machines, research staff, and students than ever.
The main areas they work on are: Natural Language (Henry Thompson, 
Mark Steedman, Graeme Ritchie), controlled deduction and problem 
solving (Alan Bundy and his research assistant and students), Robotics
(Robin Popplestone, Pat Ambler and a number of others), LOGO-style 
stuff (Jim Howe [head of department] and Peter Ross) and AI languages 
(Robert Rae, Dave Bowen and others).  There are probably others I 
don't remember. The AI Dept.  is both on UUCP and on a network 
connected to ARPANET:

        <username>%edxa%ucl-cs@isid (ARPANET)
        ...!vax135!edcaad!edee!edai!<username> (UUCP)

I have partial lists of user names for both connections which I will
mail directly to interested persons.

Fernando Pereira SRI AI Center [an old Edinburgh hand]

pereira@sri-ai (ARPA) ...!ucbvax!pereira@sri-ai (UUCP)

------------------------------

Date: 24 Sep 83 3:54:20-PDT (Sat)
From: hplabs!hp-pcd!orstcs!hakanson @ Ucb-Vax
Subject: Machine Organisms? - (nf)
Article-I.D.: hp-pcd.1920


I was reading a novel recently, and ran across the following passage re-
lating to "intelligent" machines, robots, etc.  In case anyone is interested,
the book is Satan's World, by Poul Anderson, Doubleday 1969 (p. 132).
(I hope this article doesn't seem more appropriate to sf-lovers than to ai.)

        ... They had electronic speed and precision, yes, but not
        full decision-making capacity.  ... This is not for lack
        of mystic vital forces.  Rather, the biological creature
        has available to him so much more physical organization.
        Besides sensor-computer-effector systems comparable to
        those of the machine, he has feed-in from glands, fluids,
        chemistry reaching down to the molecular level -- the
        integrated ultracomplexity, the entire battery of
        *instincts* -- that a billion-odd years of ruthlessly
        selective evolution have brought forth.  He perceives and
        thinks with a wholeness transcending any possible symbolism;
        his purposes arise from within, and therefore are infinitely
        flexible.  The robot can only do what it was designed to
        do.  Self-programming has [can] extended these limits, to the
        point where actual consciousness may occur if desired.  But
        they remain narrower than the limits of those who made
        the machines.

Later in the book, the author describes a view that if a robot "were so
highly developed as to be equivalent to a biological organism, there
would be no point in building it."  This is explained as being true
because "nature has already provided us means for making new biological
organisms, a lot cheaper and more fun than producing robots."

I won't go on with the discussion in the book, as it degenerates into the
usual debate about the theoretical, fully motivated computer that is
superior in any way..., and how such a computer would rule the world, etc.
My point in posting the above passage was to ask the experts of netland
to give their opinions of the aforementioned views.

More specifically, how do we feel about the possibilities of building
machines that are "equivalent" to intelligent biological organisms?
Or even non-intelligent ones?  Is it possible?  And if so, why bother?

It's probably obvious that we don't need to disagree with the views given
by the author in order to want to continue with our studies in Artificial
Intelligence.  But how many of us do agree?  Disagree?

Marion Hakanson         {hp-pcd,teklabs}!orstcs!hakanson        (Usenet)
                        hakanson.oregon-state@rand-relay        (CSnet)
                        hakanson@{oregon-state,orstcs}          (also CSnet)

------------------------------

Date: Wed 28 Sep 83 17:18:53-PDT
From: Peter Karp <KARP@SUMEX-AIM>
Subject: Amusement from CMU's opinion bboard

   [Reprinted from the CMU opinion board via the SU-SCORE bboard.]


Ever dreamed of flaming with the Big Boys?  ...  Had that desire to
write an immense diatribe, berating de facto all your peers who hold
contrary opinions?  ...  Felt the urge to have your fingers moving
without being connected to your brain?  Well, by simply sending in the
form on the back of this bboard post, you could begin climbing into
your pulpit alongside greats from all walks of life such as Chomsky,
Weizenbaum, Reagan, Von Daniken, Ellison, Abzug, Arafat and many many
more.  You don't even have to leave the comfort of your armchair!

Here's how it works:  Each week we send you a new lesson.  You read
the notes and then simply write one essay each week on the assigned
topic.  Your essays will be read by our expert pool of professional
flamers and graded on Sparsity, Style, Overtness, Incoherence, and a
host of other important aspects.  You will receive a long letter from
your specially selected advisor indicating in great detail why you
obviously have the intellectual depth of a soap dish.  This
apprenticeship is all there is to it.

Here are some examples of the courses offered by The School:

        Classical Flames:  You will study the flamers who started it 
all.  For example, Descartes' much-quoted demonstration that reality 
isn't.  Special attention is paid, in this course, to the old and new 
testaments and how western flaming was influenced by their structure.
(The Bible plays a particularly important role in our program and most
courses will spend at least some time tracing biblical origins or 
associations of their special topic.  See, particularly, the special 
seminar on Space Cadetism, which concentrates on ESP and UFO
phenomena.)

        Contemporary Flame Technique:  Attention is paid to the detail
of flame form in this course.  The student will practice the subtle
and overt ad hominem argument; fact avoidance maneuvers; "at length" 
writing style; overgeneralization; and other important factors which 
make the modern flame inaccessible to the general populace.  Readings 
from Russell ("Now I will admit that some unusually stupid children of
ten may find this material a bit difficult to fathom..."), Skinner, 
(primarily concentrating on his Verbal Learning), Sagan (on abstract 
overestimation) and many others.  This course is most concerned with 
politicians (sometimes, redundantly, referred to as "political
flamers") since their speech writers are particularly adept at the
technique that we wish to foster.

        Appearing Brilliant (thanks to the Harvard Lampoon): Nobel
laureates lecture on topics of world import but which are very much
outside their field of expertise.  There is a large representation of
Nobels in physics:  the discoverer of the UnCharmed Pi Mesa Beta Quark
explains how the population explosion can be averted through proper
reculterization of mothers; and professor Nikervator, first person to
properly develop the theory of faster-than-sound "Whizon" docking
choreography, tells us how mind is the sole theological entity.

        Special seminar in terminology:  The name that you give 
something is clearly more important than its semantics.  Experts in 
nomenclature demonstrate their skills.  Pulitzer Prize winner Douglas 
Hofstadter makes up 15,000 new words whose definitions, when read 
sideways, prove the existence of themselves and constitute fifteen
months of columns in Scientific American.  A special round table of
drug company and computer corporation representatives discuss how to
construct catchy names for new products and never give the slightest
hint to the public about what they mean.

        Writing the Scientific Journal Flame: Our graduates will be
able to compete in the modern world of academic and industrial
research flaming, where the call is high for trained pontificators.
The student reads short sections from several fields and then may
select a field of concentration for detailed study.

Here is an example description of a detailed scientific flaming
seminar:

        Computer Science: This very new field deals directly with the 
very metal of the flamer's tools: information and communication.  The
student selecting computer science will study several areas including,
but not exclusively:

    Artificial Intelligence: Roger Schank explains the design of
    his flame understanding and generation engine (RUSHIN) and
    will explain how the techniques that it employs constitute a
    complete model of mind, brain, intelligence, and quantum
    electrodynamics.  For contrast, Marvin Minsky does the same.
    Weizenbaum tells us, with absolutely no data or alternative
    model, why AI is logically impossible, and moreover,
    immoral.

    Programming Languages: A round table is held between Wirth,
    Hoare, Dijkstra, Iverson, Perlis, and Jean Sammet, in order
    to keep them from killing each other.

    Machines and systems: Fred Brooks and Gordon Bell lead a
    field of experts over the visual cliff of hardware
    considerations.

The list of authoritative lectures goes on and on.  In addition, an 
inspiring introduction by Feigenbaum explains how important it is that
flame superiority be maintained by the United States in the face of
the recent challenges from Namibia and the Panama Canal zone.

But there's more.  Not only will you read famous flamers in abundance,
but you will actually have the opportunity to "run with the pack".
The Famous Flamer's School has arranged to provide access for all
computer science track students to the famous ARPANet, where students
will be able to actually participate in discussions of earthshaking
current importance, along with the other brilliant young flamers using
this nationwide resource.  You'll read and write about whether
keyboards should have a space bar across the whole bottom or split
under the thumbs; whether or not Emacs is God, and which deity is the
one true editor; whether the brain actually cools the body or not;
whether the earth revolves around the sun or vice versa -- and much
more.  Your contributions will be whisked across the nation, faster
than throwing a 2400 foot magtape across the room, into the minds of
thousands of other electrolusers whose brain cells will merge with
yours for the moment that they read your personal opinion of matters
of true science!  What importance!

We believe that the program we've constructed is very special and will
provide, for the motivated student, an atmosphere almost completely 
content free in which his or her ideas can flow in vacuity.  So, take 
the moment to indicate your name, address, age, and hat size by
filling out the rear of this post and mailing it to:

        FAMOUS FLAMER'S SCHOOL
        c/o Locker number 6E
        Grand Central Station North
        New York, NY.

Act now or forever hold your peace.

------------------------------

End of AIList Digest
********************

∂03-Oct-83  1104	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #68
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Oct 83  11:03:40 PDT
Date: Monday, October 3, 1983 9:33AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #68
To: AIList@SRI-AI


AIList Digest             Monday, 3 Oct 1983       Volume 1 : Issue 68

Today's Topics:
  Humor - Famous Flamer's School Credit,
  Technology Transfer & Research Ownership,
  AI Reports - IRD & NASA,
  TV Coverage - Computer Chronicles,
  Seminars - Ullman, Karp, Wirth, Mason,
  Conferences - UTexas Symposium & IFIP Workshop
----------------------------------------------------------------------

Date: Mon 3 Oct 83 09:29:16-PDT From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Famous Flamer's School -- Credit

The Famous Flamer's School was created by Jeff.Shrager@CMU-CS-A; my
apologies for not crediting him in the original article.  If you
saved or distributed a copy, please add a note crediting Jeff.

					-- Ken Laws

------------------------------

Date: Thu 29 Sep 83 17:58:29-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Alas, I must flame...

[ I hate to flame, but here's an issue that really got to me...]

From the call for papers for the "Artificial Intelligence and Machines":

    AUTHORS PLEASE NOTE:  A Public Release/Sensitivity Approval is necessary.
    Authors from DOD, DOD contractors, and individuals whose work is government
    funded must have their papers reviewed for public release and more
    importantly sensitivity (i.e. an operations security review for sensitive
    unclassified material) by the security office of their sponsoring agency.

  How much AI work does *NOT* fall under one of the categories "Authors from
DOD, DOD contractors, and individuals whose work is government funded" ?
I read this to mean that essentially any government involvement with
research now leaves one open to government "protection".

  At issue here is not the government's duty to safeguard classified materials;
it is the intent of the government to limit distribution of non-military
basic research (alias "sensitive unclassified material"). This "we paid for
it, it's OURS (and the Russians can't have it)" mentality seems the rule now.

  But isn't science supposed to be for the benefit of all mankind,
and not just another economic bargaining chip? I cannot help but
be chilled by this divorce of science from a higher moral outlook.
Does it sound old fashioned to believe that scientific thought is
part of a common heritage, to be used to improve the lives of all? As
far as I can see, if all countries in the world follow the lead of
the US and USSR toward scientific protectionism, we scientists will
have allowed science to abandon its primary role toward learning
about ourselves and become a mere intellectual commodity.

David Rogers
DRogers@SUMEX-AIM.ARPA

------------------------------

Date: Fri 30 Sep 83 10:09:08-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: IRD Report


         [Reprinted from IEEE Computer, Sep. 1983, p. 116.]


             Rapid Growth Predicted for AI-Based Systems

Expert systems are now moving out of the research laboratory and into
the commercial marketplace, according to "Artificial Intelligence,"
a 167-page research report from International Resource Development.
Revenue from all AI hardware, software, and services will amount to
only $70 million this year but is expected to reach $8 billion
in the next 10 years.

Biomedical applications promise to be among the fastest growing
uses of AI, reducing the time and cost of diagnosing illnesses and
adding to the accuracy of diagnoses.  AI-based systems can range
from "electronic encyclopedias," which physicians can use as
reference sources, to full-fledged "electronic consultants"
capable of taking a patient through an extensive series of diagnostic
tests and determining the patient's ailments with great precision.

"Two immediate results of better diagnostic procedures may be a
reduction in the number of unnecessary surgical procedures performed
on patients and a decrease in the average number of expensive tests
performed on patients," predicts Dave Ledecky of the IRD research
staff.  He also notes that the AI technology may leave hospitals
half-empty, since some operations turn out to be unnecessary.
However, he expects no such dramatic result anytime soon, since
widespread medical application of AI technology isn't expected for
about five years.

The IRD report also describes the activities of several new companies
that are applying AI technology to medical systems.  Helena Laboratories
in Beaumont, Texas, is shipping a densitometer/analyzer, which
includes a serum protein diagnostic program developed by Rutgers
University using AI technology.  Still in the development stage
are the AI-based products of IntelliGenetics in Palo Alto,
California, which are based on work conducted at Stanford University
over the last 15 years.

Some larger, more established companies are also investing in AI
research and development.  IBM is reported to have more than five
separate programs underway, while Schlumberger, Ltd., is
spending more than $5 million per year on AI research, much of
which is centered on the use of AI in oil exploration.

AI software may dominate the future computer industry, according to
the report, with an increasing percentage of applications
programming being performed in Lisp or other AI-based "natural"
languages.

Further details on the $1650 report are available from IRD,
30 High Street, Norwalk, CT 06851; (800) 243-5008,
Telex: 64 3452.

------------------------------

Date: Fri 30 Sep 83 10:16:43-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: NASA Report


[Reprinted from IEEE Spectrum, Oct. 1983, p. 78]


Overview Explains AI

A technical memorandum from the National Aeronautics and
Space Administration offers an overview of the core ingredients
of artificial intelligence.  The volume is the first in a series
that is intended to cover both artificial intelligence and
robotics for interested engineers and managers.

The initial volume gives definitions and a short history entitled
"The rise, fall, and rebirth of AI" and then lists applications,
principal participants in current AI work, examples of the
state of the art, and future directions.  Future volumes in AI
will cover application areas in more depth and will also cover
basic topics such as search-oriented problem-solving and
planning, knowledge representation, and computational logic.

The report is available from the National Technical Information
Service, Springfield, Va. 22161.  Please ask for NASA Technical
Memorandum Number 85836.

------------------------------

Date: Thu 29 Sep 83 20:13:09-PDT
From: Ellie Engelmore <EENGELMORE@SUMEX-AIM>
Subject: TV documentary

                [Reprinted from the SU-SCORE bboard.]


KCSM-TV Channel 60 is producing a series entitled "The Computer
Chronicles".  This is a series of 30-minute programs intended to be a
serious look at the world of computers, a potential college-level
teaching device, and a genuine historical document.  The first episode
in the series (with Don Parker discussing computer security) will be
broadcast this evening...Thursday, September 29...9pm.

The second portion of the series, to be broadcast 9 pm Thursday,
October 6, will be on the subject of Artificial Intelligence (with Ed
Feigenbaum).

------------------------------

Date: Thu 29 Sep 83 19:03:27-PDT
From: Andrei Broder <Broder@SU-SCORE.ARPA@SU-Score>
Subject: AFLB

                [Reprinted from the SU-SCORE bboard.]


The "Algorithms  for  Lunch  Bunch"  (AFLB) is  a  weekly  seminar  in
analysis  of  algorithms  held   by  the  Stanford  Computer   Science
Department, every Thursday, at 12:30 p.m., in Margaret Jacks Hall, rm.
352.

At the first meeting this year, (Thursday, October 6) Prof. Jeffrey D.
Ullman, from Stanford,  will talk on  "A time-communication  tradeoff"
Abstract follows.

Further  information  about   the  AFLB  schedule   is  in  the   file
[SCORE]<broder>aflb.bboard .

If you want to  get abstracts of  the future talks,  please send me  a
message to put you on the AFLB mailing list.  If you just want to know
the title of the  next talk and  the name of the  speaker look at  the
weekly Stanford CSD  schedule that  is (or  should be)  sent to  every
bboard.
                      ------------------------

10/6/83 - Prof. Jeffrey D. Ullman (Stanford):

                       "A time-communication  tradeoff"

We examine how multiple  processors could share  the computation of  a
collection of values  whose dependencies  are in  the form  of a  grid,
e.g., the estimation of nth derivatives.  Two figures of merit are the
time t the shared computation takes and the amount of communication c,
i.e., the number of values that  are either inputs or are computed  by
one processor and  used by another.   We prove that  no matter how  we
share the responsibility for computing  an n by n  grid, the law ct  =
OMEGA(n↑3) must hold.

******** Time and place: Oct. 6, 12:30 pm in MJ352 (Bldg. 460) *******

------------------------------

Date: Thu 29 Sep 83 09:33:24-CDT
From: CS.GLORIA@UTEXAS-20.ARPA
Subject: Karp Colloquium, Oct. 13, 1983

               [Reprinted from the UTexas-20 bboard.]


Richard M. Karp, University of California at Berkeley, will present a talk
entitled, "A Fast Parallel Algorithm for the Maximal Independent Set Problem"
on Thursday, October 13, 1983 at 3:30 p.m. in Painter Hall 4.42.  Coffee
at 3 p.m. in PAI 3.24.
Abstract:
     One approach to understanding the limits of parallel computation is to
search for problems for which the best parallel algorithm is not much faster
than the best sequential algorithm.  We survey what is known about this
phenomenon and show that--contrary to a popular conjecture--the problem of
finding a maximal independent set of vertices in a graph is highly amenable
to speed-up through parallel computation.  We close by suggesting some new
candidates for non-parallelizable problems.

------------------------------

Date: Fri 30 Sep 83 21:39:45-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: N. Wirth, Colloquium 10/4/83

                [Reprinted from the SU-SCORE bboard.]


CS COLLOQUIUM:  Niklaus Wirth will be giving the
opening colloquium of this quarter on Tuesday (Oct. 4),
at 4:15 in Terman Auditorium.  His talk is titled
"Reminiscences and Reflections".  Although there is
no official abstract, in discussing this talk with him
I learned that Reminiscences refer to his days here at
Stanford one generation ago, and Reflections are on
the current state of both software and hardware, including
his views on what's particularly good and bad in the
current research in each area.  I am looking forward to
this talk, and invite all members of our department,
and all interested colleagues, to attend.

Professor Wirth's talk will be preceded by refreshments
served in the 3rd floor lounge (in Margaret Jacks Hall)
at 3:45.  Those wishing to schedule an appointment with
Professor Wirth should contact ELYSE@SCORE.

------------------------------

Date: 30 Sep 83  1049 PDT
From: Carolyn Talcott <CLT@SU-AI>
Subject: SEMINAR IN LOGIC AND FOUNDATIONS

                [Reprinted from the SU-SCORE bboard.]


Organizational and First Meeting

Time: Wednesday, Oct. 5, 4:15-5:30 PM

Place:  Mathematics Dept. Faculty Lounge, 383N Stanford

Speaker: Ian Mason

Title: Undecidability of the metatheory of the propositional calculus.

   Before the talk there will be a discussion of plans for the seminar
this fall.
                       S. Feferman


[PS - If you read this notice on a bboard and would like to be on the
distribution list send me a message.  - CLT@SU-AI]

------------------------------

Date: Thu 29 Sep 83 14:24:36-CDT
From: Clive Dawson <CC.Clive@UTEXAS-20.ARPA>
Subject: Schedule for C.S. Dept. Centennial Symposium

               [Reprinted from the UTexas-20 bboard.]


                        COMPUTING AND THE INFORMATION AGE

                             October 20 & 21, 1983

                        Joe C. Thompson Conference Center

Thursday, Oct. 20
-----------------

8:30    Welcoming address - A. G. Dale (UT Austin)
                            G. J. Fonken, VP for Acad. Affairs and Research

9:00    Justin Rattner (Intel)
        "Directions in VLSI Architecture and Technology"

10:00   J. C. Browne (UT Austin)

10:15   Coffee Break

10:45   Mischa Schwartz (Columbia)
        "Computer Communications Networks: Past, Present and Future"

11:45   Simon S. Lam (UT Austin)

12:00   Lunch

2:00    Herb Schwetman (Purdue)
        "Computer Performance: Evaluation, Improvement, and Prediction"

3:00    K. Mani Chandy (UT Austin)

3:15    Coffee Break

3:45    William Wulf (Tartan Labs)
        "The Evolution of Programming Languages"

4:45    Don Good (UT Austin)

Friday, October 21
------------------

8:30    Raj Reddy (CMU)
        "Supercomputers for AI"

9:30    Woody Bledsoe (UT Austin)

9:45    Coffee Break

10:15   John McCarthy (Stanford)
        "Some Expert Systems Require Common Sense"

11:15   Robert S. Boyer and J Strother Moore (UT Austin)

11:30   Lunch

1:30    Jeff Ullman (Stanford)
        "A Brief History of Achievements in Theoretical Computer Science"

2:30    James Bitner (UT Austin)

2:45    Coffee Break

3:15    Cleve Moler (U. of New Mexico)
        "Mathematical Software -- The First of the Computer Sciences"

4:15    Alan Cline (UT Austin)

4:30    Summary - K. Mani Chandy, Chairman, Dept. of Computer Sciences

------------------------------

Date: Sunday, 2 October 1983 17:49:13 EDT
From: Mario.Barbacci@CMU-CS-SPICE
Subject: Call For Participation -- IFIP Workshop

                            CALL FOR PARTICIPATION
                IFIP Workshop on Hardware Supported Implementation of
                    Concurrent Languages in Distributed Systems
                        March 26-28, 1984, Bristol, U.K.

TOPICS:
- the impact of distributed computing languages and compilers on the
  architecture of distributed systems
- operating systems: centralized/decentralized control, process
  communications and synchronization, security
- hardware design and interconnections
- hardware/software interrelation and trade-offs
- modelling, measurements, and performance

Participation is by INVITATION ONLY.  If you are interested in attending this
workshop, write to the workshop chairman and include an abstract (approx.
1000 words) of your proposed contribution.

Deadline for Abstracts: November 15, 1983
Workshop Chairman:      Professor G.L. Reijns
                        Chairman, IFIP Working Group 10.3
                        Delft University of Technology
                        P.O. Box 5031
                        2600 GA Delft
                        The Netherlands

------------------------------

End of AIList Digest
********************

∂03-Oct-83  1255	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #69
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Oct 83  12:51:38 PDT
Date: Monday, October 3, 1983 9:50AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #69
To: AIList@SRI-AI


AIList Digest             Monday, 3 Oct 1983       Volume 1 : Issue 69

Today's Topics:
  Rational Psychology - Examples,
  Organization - Reflexive Reasoning & Consciousness & Learning & Parallelism
----------------------------------------------------------------------

Date: Thu, 29 Sep 83 18:29:39 EDT
From: "John B. Black" <Black@YALE.ARPA>
Subject: "Rational Psychology"


     Recently on this list, Pereira held up as a model for us all, Doyle's
"Rational Psychology" article in AI Magazine.  Actually, I think what Pereira
is really requesting is a reduction of overblown claims and assertions with no
justification (e.g., "solutions" to the natural language problem).  However,
since he raised the "rational psychology" issue I thought I would comment on it.

     I too read Doyle's article with interest (although it seemed essentially
the same as Don Norman's numerous calls for a theoretical psychology in the
early 1970s), but (like the editor of this list) I was wondering what the
referents were of the vague descriptions of "rational psychology."  However,
Doyle does give some examples of what he means: mathematical logic and
decision theory, mathematical linguistics, and mathematical theories of
perception.  Unfortunately, this list is rather disappointing because --
with the exception of the mathematical theories of perception -- they have
all proved to be misleading when actually applied to people's behavior.

     Having a theoretical (or "rational" -- terrible name with all the wrong
connotations) psychology is certainly desirable, but it does have to make some
contact with the field it is a theory of.  One of the problems here is that
the "calculus" of psychology has yet to be invented, so we don't have the tools
we need for the "Newtonian mechanics" of psychology.  The latest mathematical
candidate was catastrophe theory, but it turned out to be a catastrophe when
applied to human behavior.  Perhaps Pereira and Doyle have a "calculus"
to offer.

     Lacking such an appropriate mathematics, however, does not stop a
theoretical psychology from existing.  In fact, I offer three recent examples
of what a theoretical psychology ought to be doing at this time:

 Tversky, A.  Features of similarity.  PSYCHOLOGICAL REVIEW, 1977, 327-352.

 Schank, R.C.  DYNAMIC MEMORY.  Cambridge University Press, 1982.

 Anderson, J.R.  THE ARCHITECTURE OF COGNITION.  Harvard University Press, 1983.

------------------------------

Date: Thu 29 Sep 83 19:03:40-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Self-description, multiple levels, etc.

For a brilliant if tentative attack on the questions noted by
Prem Devanbu, see Brian Smith's thesis "Reflection and Semantics
in a Procedural Language," MIT/LCS/TR-272.

Fernando Pereira

------------------------------

Date: 27 Sep 83 22:25:33-PDT (Tue)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: reflexive reasoning ? - (nf)
Article-I.D.: uiucdcs.3004


I believe the pursuit of "consciousness" to be complicated by the difficulty
of defining what we mean by it (to state the obvious). I prefer to think in
less "spiritual" terms, say starting with the ability of the human memory to
retain impressions for varying periods of time. For example, students cramming
for an exam can remember long lists of things for a couple of hours -- just
long enough -- and forget them by the end of the same day. Some thoughts are
almost instantaneously lost, others last a lifetime.

Here's my suggestion: let's start thinking in terms of self-observation, i.e.
the construction of models to explain the traces that are left behind by things
we have already thought (and felt?). These models will be models of what goes
on in the thought processes, can be incorrect and incomplete (like any other
model), and even reflexive (the thoughts dedicated to this analysis leave
their own traces, and are therefore subject to modelling, creating notions
of self-awareness).

To give a concrete (if standard) example: it's quite reasonable for someone
to say to us, "I didn't know that." Or again, "Oh, I just said it, what was
his name again ... How can I be so forgetful!"

This leads us into an interesting "problem": the fading of human memory with
time. I would not be surprised if this was actually desirable, and had to be
emulated by computer. After all, if you're going to retain all those traces
of where a thought process has gone, traces of the analysis of those traces,
and so on, then memory would fill up very quickly.

I have been thinking in this direction for some time now, and am working on
a programming language which operates on several of the principles stated
above. At present the language is capable of responding dynamically to any
changes in problem state produced by other parts of the program, and rules
can even respond to changes induced by themselves. Well, that's the start;
the process of model construction seems to me to be by far the harder part
of the task.
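[METALOG itself is not shown in the message above; as a purely hypothetical sketch, a forward-chaining production system in which rules respond to changes in problem state -- including changes their own firings produce -- might look like this:]

```python
# Hypothetical sketch of a forward-chaining production system (NOT METALOG).
# Each rule is a (condition, action) pair; an action adds new facts, and on
# the next cycle every rule -- including the one that fired -- can respond
# to those additions.

def run(facts, rules, max_cycles=100):
    facts = set(facts)
    for _ in range(max_cycles):
        new = set()
        for condition, action in rules:
            for fact in list(facts):
                if condition(fact):
                    new |= set(action(fact)) - facts
        if not new:          # quiescence: no rule produced anything new
            break
        facts |= new         # state change; rules get to respond next cycle
    return facts

rules = [
    # parent(X, Y) gives ancestor(X, Y) ...
    (lambda f: f[0] == "parent",
     lambda f: [("ancestor", f[1], f[2])]),
    # ... and rules can fire on facts that other rules derived.
    (lambda f: f[0] == "ancestor",
     lambda f: [("person", f[1]), ("person", f[2])]),
]
facts = run([("parent", "a", "b"), ("parent", "b", "c")], rules)
```

Here the second rule fires only on facts produced by the first, which is the kind of rule-to-rule responsiveness the message describes.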

It becomes especially interesting when you think about modelling what look
like "levels" of self-awareness, but could actually be manifestations of just
one mechanism: traces of some work, which are analyzed, thus leaving traces
of self-analysis; which are analyzed ... How are we to decide that the traces
being analyzed are somehow different from the traces of the analysis? Even
"self-awareness" (as opposed to full-blown "consciousness") will be difficult
to understand. However, at this point I am convinced that we are not dealing
with a potential for infinite regress, but with a fairly simple mechanism
whose results are hard to interpret. If I am right, we may have some thinking
to do about subject-object distinctions.

In case you're interested in my programming language, look for some papers due
to appear shortly:

        Logic-Programming Production Systems with METALOG.  Software Practice
           and Experience, to appear shortly.

        METALOG: a Language for Knowledge Representation and Manipulation.
           Conf on AI (April '83).

Of course, I don't say that I'm thinking about "self-awareness" as a long-term
goal (my co-author isn't) ! If/when such a goal becomes acceptable to the AI
community it will probably be called something else. Doesn't "reflexive
reasoning" sound more scientific?

                                Marcel Schoppers,
                                Dept of Comp Sci,
                                U of Illinois @ Urbana-Champaign
                                uiucdcs!marcel

------------------------------

Date: 27 Sep 83 19:24:19-PDT (Tue)
From: decvax!genrad!security!linus!philabs!cmcl2!floyd!vax135!ariel!ho
      u5f!hou5e!hou5d!mat@Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: hou5d.674

I may be naive, but it seems to me that any attempt to produce a system that
will exhibit consciousness-like behaviour will require emotions and the
underlying base that they need and supply.  Reasoning did not evolve
independently of emotions; human reason does not, in my opinion, exist
independently of them.

Any comments?  I don't recall seeing this topic discussed.  Has it been?  If
not, is it about time to kick it around?
                                                Mark Terribile
                                                hou5d!mat

------------------------------

Date: 28 Sep 83 12:44:39-PDT (Wed)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: drufl.674

I agree with Mark. An interesting book to read regarding consciousness is
"The Origin of Consciousness in the Breakdown of the Bicameral Mind" by
Julian Jaynes. Although I may not agree fully with his thesis, it did
get me thinking and questioning the usual ideas regarding
consciousness.

An analogy regarding consciousness: "emotions are like the roots of a
plant, while consciousness is the fruit".

                                Samir Shah
                                AT&T Information Systems, Denver.
                                drufl!samir

------------------------------

Date: 30 Sep 83 13:42:32 EDT
From: BIESEL@RUTGERS.ARPA
Subject: Recursion of representations.


Some of the more recent messages have questioned the possibility of
producing programs which can "understand" and "create" human discourse,
because this kind of "understanding" seems to be based upon an infinite
kind of recursion. Stated very simply, the question is "how can the human
mind understand itself, given that it is finite in capacity?", which
implies that humans cannot create a machine equivalent of a human mind,
since (one assumes) understanding is required before construction
becomes possible.

There are two rather simple objections to this notion:
        1) Humans create minds every day, without understanding
           anything about it. Just some automatic biochemical
           machinery, some time, and exposure to other minds
           does the trick for human infants.

        2) John von Neumann, and more recently E.F. Codd,
           demonstrated in a very general way the existence
           of universal constructors in cellular automata.
           These are configurations in cellular space which
           are able to construct any configuration, including
           copies of themselves, in finite time (for finite
           configurations).

No infinite recursion is involved in either case, nor is "full"
understanding required.
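[Von Neumann's and Codd's constructors are far too large to reproduce here, but the underlying mechanism -- a grid of cells updated by a purely local rule -- can be shown with a toy one-dimensional automaton. This is illustrative only; rule 110 is not a constructor:]

```python
# One-dimensional cellular automaton (Wolfram's rule 110): each cell's next
# state depends only on itself and its two neighbours.  Von Neumann's and
# Codd's universal constructors use the same local-update idea, just with
# many more states and two dimensions.

RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left, me, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (me << 1) | right   # 0..7
        out.append((RULE >> neighbourhood) & 1)           # look up next state
    return out

cells = [0] * 31
cells[15] = 1                      # single live cell in the middle
for _ in range(10):
    cells = step(cells)            # complex growth from a purely local rule
```

No step ever consults more than three cells, yet global structure emerges -- the point being that no "full understanding" of the whole configuration is needed to update it.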

I suspect that at some point in the game we will have learned enough about
what works (in a primarily empirical sense) to produce machine intelligence.
In the process we will no doubt learn a lot about mind in general, and our
own minds in particular, but we will still not have a complete understanding
of either.

People will continue to produce AI programs; they will gradually get better
at various tasks; others will combine various approaches and/or programs to
create systems that play chess and can talk about the geography of South
America; occasionally someone will come up with an insight and a better way
to solve a sub-problem ("subjunctive reference shift in frame-demon
instantiation shown to be optimal for linearization of semantic analysis
of noun phrases" IJCAI 1993); lay persons will come to take machine intelligence
for granted; AI people will keep searching for a better definition of
intelligence; nobody will really believe that machines have that indefinable
something (call it soul, or whatever) that is essential for a "real" mind.

                        Pete Biesel@Rutgers.arpa

------------------------------

Date: 29 Sep 83 14:14:29 EDT
From: SOO@RUTGERS.ARPA
Subject: Top-Down? Bottom-Up?

                [Reprinted from the Rutgers bboard.]


 I happened to read a paper by Michael A. Arbib about brain theory.
 Its first section, "Brain Theory: 'Bottom-up' and
 'Top-Down'", I think will shed some light on our issue of
 top-down and bottom-up approaches in the machine learning seminar.
 I would like to quote several remarks from the brain theorist's
 viewpoint to share with those interested:

"    I want to suggest that brain theory should confront the 'bottom-up'
analyses of neural modelling not only with biological control theory but
also with the 'top-down' analyses of artificial intelligence and cognitive
psychology. In bottom-up analyses, we take components of known function, and
explore ways of putting them together to synthesize more and more complex
systems. In top-down analyses, we start from some complex functional behavior
that interests us, and try to determine what are natural subsystems into which
we can decompose a system that performs in the specified way.  I would argue
that progress in brain theory will depend on the cyclic interaction of these
two methodologies. ..."


"  The top-down approach complements bottom-up studies, for one cannot simply
wait until one knows what all the neurons are and how they are connected to
then simulate the complete system. ..."

I believe that a similar philosophy applies to the study of machine learning
too.

For those interested, the paper can be found in COINS technical report 81-31
by M. A. Arbib, "A View of Brain Theory".


Von-Wun,

------------------------------

Date: Fri, 30 Sep 83 14:45:55 PDT
From: Rik Verstraete <rik@UCLA-CS>
Subject: Parallelism and Physiology

I would like to comment on your message that was printed in AIList Digest
V1#63, and I hope you don't mind if I send a copy to the discussion list
"self-organization" as well.

        Date: 23 Sep 1983 0043-PDT
        From: FC01@USC-ECL
        Subject: Parallelism

        I thought I might point out that virtually no machine built in the
        last 20 years is actually lacking in parallelism. In reality, just as
        the brain has many neurons firing at any given time, computers have
        many transistors switching at any given time. Just as the cerebellum
        is able to maintain balance without the higher brain functions in the
        cerebrum explicitly controlling the IO, most current computers have IO
        controllers capable of handling IO while the CPU does other things.

The issue here is granularity, as discussed in general terms by E. Harth
("On the Spontaneous Emergence of Neuronal Schemata," pp. 286-294 in
"Competition and Cooperation in Neural Nets," S. Amari and M.A. Arbib
(eds), Springer-Verlag, 1982, Lecture Notes in Biomathematics # 45).  I
certainly recommend his paper.  I quote:

        One distinguishing characteristic of the nervous system is
        thus the virtually continuous range of scales of tightly
        intermeshed mechanisms reaching from the macroscopic to the
        molecular level and beyond.  There are no meaningless gaps
        of just matter.

I think Harth has a point, and applying his ideas to the issue of parallel
versus sequential clarifies some aspects.

The human brain seems to be parallel at ALL levels.  Not only is a large
number of neurons firing at the same time, but also groups of neurons,
groups of groups of neurons, etc. are active in parallel at any time.  The
whole neural network is a totally parallel structure, at all levels.

You pointed out (correctly) that in modern electronic computers a large
number of gates are "working" in parallel on a tiny piece of the problem,
and that also I/O and CPU run in parallel (some systems even have more than
one CPU).  However, the CPU itself is a finite state machine, meaning it
operates as a time-sequence of small steps.  This level is inherently
sequential.  It therefore looks like there's a discontinuity between the
gate level and the CPU/IO level.
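[The sequential character of that level can be illustrated with a toy finite state machine -- a hypothetical example, not a model of any real CPU:]

```python
# A finite state machine consumes its input strictly one symbol per step:
# whatever parallelism exists below it (gates switching), at this level the
# computation is a single sequential trajectory through states.

def run_fsm(transitions, start, accepting, inputs):
    state = start
    for symbol in inputs:          # one discrete step per symbol -- serial
        state = transitions[(state, symbol)]
    return state in accepting

# Toy example: accept binary strings containing an even number of 1s.
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}
run_fsm(transitions, "even", {"even"}, "1101")   # three 1s -> rejected
```

However many gates implement each transition in parallel, the trajectory even -> odd -> even -> ... is one step at a time, which is the discontinuity in granularity being described.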

I would even extend this idea to machine learning, although I'm largely
speculating now.  I have the impression that brains not only WORK in
parallel at all levels of granularity, but also LEARN in that way.  Some
computers have implemented a form of learning, but it is almost exclusively
at a very high level (most current AI work on learning is at this level),
or only at a very low level (cf. Perceptron).  A spectrum of adaptation is
needed.

Maybe the distinction between the words learning and self-organization is
only a matter of granularity too. (??)

        Just as people have faster short term memory than long term memory but
        less of it, computers have faster short term memory than long term
        memory and use less of it. These are all results of cost/benefit
        tradeoffs for each implementation, just as I presume our brains and
        bodies are.

I'm sure most people will agree that brains do not have separate memory
neurons and processing neurons or modules (or even groups of neurons).
Memory and processing are completely integrated in a human brain.
Certainly, there are not physically two types of memories, LTM and STM.
The concept of LTM/STM is only a paradigm (no doubt a very useful one), but
when it comes to implementing the concept, there is a large discrepancy
between brains and machines.

        Don't be so fast to think that real computer designers are
        ignorant of physiology.

Indeed, a lot of people I know in Computer Science do have some idea of
physiology.  (I am a CS major with some background in neurophysiology.)
Furthermore, much of the early CS emerged from neurophysiology, and was an
explicit attempt to build artificial brains (at a hardware/gate level).
However, although "real computer designers" may not be ignorant of
physiology, it doesn't mean that they actually manage to implement all the
concepts they know.  We still have a long way to go before we have
artificial brains...

        The trend towards parallelism now is more like
        the human social system of having a company work on a problem. Many
        brains, each talking to each other when they have questions or
        results, each working on different aspects of a problem. Some people
        have breakdowns, but the organization keeps going. Eventually it comes
        up with a product; although it may not really solve the problem posed
        at the beginning, it may have solved a related problem or found a
        better problem to solve.

Again, working in parallel at this level doesn't mean everything is
parallel.

                Another copyrighted excerpt from my not yet finished book on
        computer engineering modified for the network bboards, I am ever
        yours,
                                                Fred


All comments welcome.

        Rik Verstraete <rik@UCLA-CS>

PS: It may sound like I am convinced that parallelism is the only way to
go.  Parallelism is indeed very important, but still, I believe sequential
processing plays an important role too, even in brains.  But that's a
different issue...

------------------------------

End of AIList Digest
********************

∂03-Oct-83  1907	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #70
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Oct 83  19:06:31 PDT
Date: Monday, October 3, 1983 5:38PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #70
To: AIList@SRI-AI


AIList Digest            Tuesday, 4 Oct 1983       Volume 1 : Issue 70

Today's Topics:
  Technology Transfer & Research Ownership - Clarification,
  AI at Edinburgh - Description
----------------------------------------------------------------------

Date: Mon 3 Oct 83 11:55:41-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: recent flame

    I would like to clarify my recent comments on the disclaimer published
with the conference announcement for the "Intelligent Systems and Machines"
conference to be given at Oakland University. I did not mean to suggest
that the organizers of this particular conference are the targets of my
criticism; indeed, I congratulate them for informing potential attendees
of their obligations under the law. I sincerely apologize for not making
this obvious in my original note.

    I also realize that most conferences will have to deal with this issue
in the future, and meant my message not as a "call to action", but rather,
as a "call to discussion" of the proper role of government in AI and science
in general. I believe that we should follow these rules, but should
also participate in informed discussion of their long-range effect and
direction.

Apologies and regards,

David Rogers
DRogers@SUMEX-AIM.ARPA

------------------------------

Date: Friday, 30-Sep-83  14:17:58-BST
From: BUNDY HPS (on ERCC DEC-10) <bundy@edxa>
Reply-to: bundy@rutgers.arpa
Subject: Does Edinburgh AI exist?


        A while back someone in your digest asked whether the AI
dept at Edinburgh still exists. The short answer is yes, it flourishes.
The long answer is contained in the departmental description that follows.
                Alan Bundy

------------------------------

Date: Friday, 30-Sep-83  14:20:00-BST
From: BUNDY HPS (on ERCC DEC-10) <bundy@edxa>
Reply-to: bundy@rutgers.arpa
Subject: Edinburgh AI Dept - A Description


THE DEPARTMENT OF ARTIFICIAL INTELLIGENCE AT EDINBURGH UNIVERSITY

Artificial Intelligence was recognised as a separate discipline by Edinburgh
University in 1966.  The Department in its present form was created in 1974.
During its existence it has steadily built up a programme of undergraduate and
post-graduate teaching and engaged in a vigorous research programme.  As the
only Department of Artificial Intelligence in any university, and as an
organisation which has made a major contribution to the development of the
subject, it is poised to play a unique role in the advance of Information
Technology which is seen to be a national necessity.

The Department collaborates closely with other departments within the
University in two distinct groupings.  Departments concerned with Cognitive
Science, namely A.I., Linguistics, Philosophy and Psychology all participate
in the School of Epistemics, which dates from the early 70's.  A new
development is an active involvement with Computer Science and Electrical
Engineering.  The 3 departments form the basis of the School of Information
Technology.  A joint MSc in Information Technology began in 1983.

A.I. are involved in collaborative activities with other institutions
which are significant in that they involve the transfer of people,
ideas and software.  In particular this involves MIT (robotics),
Stanford (natural language), Carnegie-Mellon (the PERQ machine) and
Grenoble (robotics).

Relationships with industry are progressing.  As well as a number of
development contracts, A.I. have recently had a teaching post funded by the
software house Systems Designers Ltd.  There is, however, a natural limit to
the extent to which a University Department can provide a service to industry:
consequently a proposal to create an Artificial Intelligence Applications
Institute has been put forward and is at an advanced stage of planning.  This
will operate as a revenue earning laboratory, performing a technology transfer
function on the model of organisations like the Stanford Research Institute or
Bolt Beranek and Newman.

Research in A.I.

A.I. is a new subject, so there is a very close relationship between
teaching at all levels and research.  Artificial Intelligence is about making
machines behave in ways which exhibit some of the characteristics of
intelligence, and about how to integrate such capabilities into larger
coherent systems.  The vehicle for such studies has been the digital computer,
chosen for its flexibility.

A.I. Languages and Systems.

The development of high level programming languages has been crucial to all
aspects of computing because of the consequent easing of the task of
communicating with these machines.  Artificial Intelligence has given birth to
a distinctive series of languages which satisfy different design constraints
to those developed by Computer Scientists whose primary concern has been to
develop languages in which to write reliable and efficient programming systems
to perform standard computing tasks.  Languages developed in the Artificial
Intelligence field have been intended to allow people readily to try out ideas
about how a particular cognitive process can be mechanised.  Consequently they
have provided symbolic computation as well as numeric, and have allowed
program code and data to be equally manipulable.  They are also highly
interactive, and often integrated with a sophisticated text editor, so that
the iteration time for trying out a new idea can be rapid.

Edinburgh has made a substantial contribution to A.I. programming languages
(with significant cross fertilisation to the Computer Science world) and will
continue to do so.  POP-2 was designed and developed in the A.I. Department
by Popplestone and Burstall.  The development of Prolog has been more complex.
Kowalski first formulated the crucial idea of predicate logic as a programming
language during his period in the A.I. Department.  Prolog itself was designed
and first implemented in Marseille, as a result of Kowalski's interaction with
a research group there.  This was followed by a re-implementation at
Edinburgh, which demonstrated its potential as a practical tool.

To date the A.I. Department have supplied implementations of A.I. languages
to over 200 laboratories around the world, and are involved in an active
programme of Prolog systems development.

The current development in languages is being undertaken by a group supported
by the SERC, led by Robert Rae, and supervised by Dr Howe.  The concern of the
group is to provide language support for A.I. research nationwide, and to
develop A.I. software for a single user machine, the ICL PERQ.  The major goal
of this project is to provide the superior symbolic programming capability of
Prolog, in a user environment of the quality to be found in modern personal
computers with improved interactive capabilities.

Mathematical Reasoning.

If Artificial Intelligence is about mechanising reasoning, it has a close
relationship with logic which is about formalising mathematical reasoning, and
with the work of those philosophers who are concerned with formalising
every-day reasoning.  The development of Mathematical Logic during the 20th
century has provided a part of the theoretical basis for A.I.  Logic provides a
rigorous specification of what may in principle be deduced - it says little
about what may usefully be deduced.  And while it may superficially appear
straightforward to render ordinary language into logic, on closer examination
it can be seen to be anything but easy.

Nevertheless, logic has played a central role in the development of A.I. in
Edinburgh and elsewhere.  An early attempt to provide some control over the
direction of deduction was the resolution principle, which introduced a sort
of matching procedure called unification between parts of the axioms and parts
of a theorem to be proved.  While this principle was inadequate as a means of
guiding a machine in the proof of significant theorems, it survives in Prolog
whose equivalent of procedure call is a restricted form of resolution.
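[As a rough sketch of the matching procedure just mentioned -- simplified, with no occurs check, and in Python rather than any of the languages discussed -- unification of two terms might be implemented as:]

```python
# Simplified unification of two terms.  Variables are strings starting with
# an uppercase letter; compound terms are tuples (functor, arg1, arg2, ...).
# Returns a substitution dict, or None if the terms cannot be matched.
# (Real Prolog unification also needs an occurs check, omitted here.)

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# parent(X, bob) unifies with parent(alice, Y) via {X: alice, Y: bob}.
unify(("parent", "X", "bob"), ("parent", "alice", "Y"))
```

Prolog's procedure call is essentially this matching of a goal against a clause head, which is why resolution survives there in restricted form.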

A.I. practitioners still regard the automation of mathematical reasoning as
a crucial area in A.I., but have moved from earlier attempts to find uniform
procedures for an efficient search of the space of possible deductions to the
creation of systems which embody expert knowledge about specific domains.  For
example if such a system is trying to solve a (non linear) equation, it may
adopt a strategy of using the axioms of algebra to bring two instances of the
unknown closer together with the "intention" of getting them to coalesce.
Work in mathematical reasoning is under the direction of Dr Bundy.
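[The strategy described above can be caricatured as a single rewrite rule -- a hypothetical illustration, not the Department's actual system: collecting x*a + x*b into x*(a + b) reduces two occurrences of the unknown to one, after which the equation can be solved directly.]

```python
# Hypothetical sketch of "bringing two instances of the unknown together":
# one algebraic rewrite that collects x*a + x*b into x*(a + b).
# Expressions are nested tuples: ("+", left, right), ("*", left, right).

def collect(expr, x):
    if isinstance(expr, tuple) and expr[0] == "+":
        l, r = expr[1], expr[2]
        if (isinstance(l, tuple) and isinstance(r, tuple)
                and l[0] == r[0] == "*" and l[1] == r[1] == x):
            # Two occurrences of x coalesce into one.
            return ("*", x, ("+", l[2], r[2]))
    return expr

collect(("+", ("*", "x", "a"), ("*", "x", "b")), "x")
# -> ("*", "x", ("+", "a", "b"))
```

A system embodying expert knowledge would have many such rules and a strategy, such as the "attraction" described above, for choosing which one moves the occurrences of the unknown closer together.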

Robotics.

The Department has always had a lively interest in robotics, in particular in
the use of robots for assembly.  This includes the use of vision and force
sensing, and the design of languages for programming assembly robots.  Because
of the potential usefulness of fast moving robots, the Department has
undertaken a study of their dynamic behaviour, design and control.  The work
of the robot group is directed by Mr Popplestone.

A robot command language RAPT is under development:  this is intended to make
it easy for non-computer experts to program an assembly robot.  The idea is
that the assembly task should be programmed in terms of the job that is to be
done and how the objects are to be fitted together, rather than in terms of
how the manipulator should be moved.  This SERC funded work is steered by a
Robot Language Working Party which consists of industrialists and academics;
the recently formed Tripartite Study Group on Robot Languages extends the
interest to France and Germany.

An intelligent robot needs to have an internal representation of its world
which is sufficiently accurate to allow it to predict the results of planned
actions.  This means that, among other things, it needs a good representation
of the shapes of bodies.  While conventional shape modelling techniques permit
a hypothetical world to be represented in a computer they are not ideal for
robot applications, and the aim at Edinburgh is to combine techniques of shape
modelling with techniques used in A.I. so that the advantages of both may be
used.  This will include the ability to deal effectively with uncertainty.

Recently, in collaboration with GEC, the robotics group have begun to consider
how the techniques of spatial inference which have been developed can be
extended into the area of mechanical design, based on the observation that the
essence of any design is the relationship between part features, rather than
the specific quantitative details.  A proposal is being pursued for a
demonstrator project to produce a small scale, but highly integrated "Design
and Make" system on these lines.

Work on robot dynamics, also funded by the SERC, has resulted in the
development of highly efficient algorithms for simulating standard serial
robots, and in a novel representation of spatial quantities, which greatly
simplifies the mathematics.

Vision and Remote Sensing.

The interpretation of data derived from sensors depends on expectations about
the structure of the world which may be of a general nature, for example that
continuous surfaces occupy much of the scene, or specific.  In manufacture the
prior expectations will be highly specific: one will know what objects are
likely to be present and how they are likely to be related to each other.  One
vision project in the A.I. Department is taking advantage of this in
integrating vision with the RAPT development in robotics - the prior
expectations are expressed by defining body geometry in RAPT, and by defining
the expected inter-body relationships in the same medium.

A robot operating in a natural environment will have much less specific
expectations, and the A.I. Department collaborate with Heriot-Watt
University to study the sonar based control of a submersible.  This involves
building a world representation by integrating stable echo patterns, which are
interpreted as objects.

Natural Language.

A group working in the Department of A.I. and related departments in the School
of Epistemics is studying the development of computational models of language
production, the process whereby communicative intent is transformed into
speech.  The most difficult problems to be faced when pursuing this goal cover
the fundamental issues of computation:  structure and process.  In the domain
of linguistic modelling, these are the questions of representation of
linguistic and real world knowledge, and the understanding of the planning
process which underlies speaking.

Many sorts of knowledge are employed in speaking - linguistic knowledge of how
words sound, of how to order the parts of a sentence to communicate who did
what to whom, of the meaning of words and phrases, and common sense knowledge
of the world.  Representing all of these is prerequisite to using them in a
model of language production.

On the other hand, planning provides the basis for approaching the issue of
organizing and controlling the production process, for the mind seems to
produce utterances as the synthetic, simultaneous resolution of numerous
partially conflicting goals - communicative goals, social goals, purely
linguistic goals - all variously determined and related.

The injection of dynamic concerns into what has heretofore been an
essentially static enterprise opens a vast potential for change in the study
of human language, and the A.I. Department sees its work as attempting to
realise some of that potential.  The study of natural
language processing in the department is under the direction of Dr Thompson.

Planning Systems.

General purpose planning systems for automatically producing plans of action
for execution by robots have been a long standing theme of A.I. research.  The
A.I. Department at Edinburgh had a very active programme of planning research
in the mid 1970s and was one of the leading international centres in this
area.  The Edinburgh planners were applied to the generation of project plans
for large industrial activities (such as electricity turbine overhaul
procedures).  These planners have continued to provide an important source of
ideas for later research and development in the field.  A prototype planner in
use at NASA's Jet Propulsion Laboratory which can schedule the activities of a
Voyager-type planetary probe is based on Edinburgh work.

New work on planning has recently begun in the Department and is mainly
concerned with the interrelationships between planning, plan execution and
monitoring.  The commercial exploitation of the techniques is also being
discussed.  The Department's planning work is under the direction of Dr Tate.

Knowledge Based and Expert Systems.

Much of the A.I. Department's work uses techniques often referred to as
Intelligent Knowledge Based Systems (IKBS) - this includes robotics, natural
language, planning and other activities.  However, researchers in the
Department of A.I. are also directly concerned with the creation of Expert
Systems in Ecological Modelling, User Aids for Operating Systems, Sonar Data
Interpretation, etc.

Computers in Education.

The Department has pioneered in this country an approach to the use of
computers in schools in which children can engage in an active and creative
interaction with the computer without needing to acquire abstract concepts and
manipulative skills for which they are not yet ready.  The vehicle for this
work has been the LOGO language, which has a simple syntax making few demands
on the typing skills of children.  While LOGO is in fact equivalent to a
substantial subset of LISP, a child can get moving with a very small subset of
the language, and one which makes the actions of the computer immediately
concrete in the form of the movements of a "turtle", which can either be
steered around a VDU screen or take the form of a small mobile robot.
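
The flavour of turtle geometry can be conveyed in a few lines.  The sketch
below is in Python rather than LOGO; the command names and starting pose are
invented for illustration, not the Edinburgh system's own:

```python
import math

def run_turtle(commands):
    """Interpret a tiny LOGO-like command list, returning the turtle's
    final (x, y) position.  The command names are invented for
    illustration; real LOGO has many more."""
    x, y, heading = 0.0, 0.0, 90.0      # start at the origin, facing "up"
    for op, arg in commands:
        if op == "FORWARD":
            x += arg * math.cos(math.radians(heading))
            y += arg * math.sin(math.radians(heading))
        elif op == "RIGHT":
            heading -= arg
        elif op == "LEFT":
            heading += arg
    return x, y

# Four FORWARD/RIGHT pairs trace a square, bringing the turtle home.
square = [("FORWARD", 100), ("RIGHT", 90)] * 4
```

Whether the "turtle" is a cursor on a screen or a robot on the floor, the
geometry is the same; that concreteness is what makes the idiom accessible to
children.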

This approach has a significant value in Special Education.  For example in
one study an autistic boy found he was able to communicate with a "turtle",
which apparently acted as a metaphor for communicating with people, resulting
in his being able to use language spontaneously for the first time.  In
another study involving mildly mentally and physically handicapped youngsters
a touch screen device invoked procedures for manipulating pictorial materials
designed to teach word attack skills to non-readers.  More recent projects
include a diagnostic spelling program for dyslexic children, and a suite of
programs which deaf children can use to manipulate text to improve their
ability to use language expressively.  Much of the Department's Computers in
Education work is under the direction of Dr Howe.

Teaching in the Department of A.I.

The Department is involved in an active teaching programme at undergraduate
and postgraduate level.  At undergraduate level, there are A.I.  first, second
and third year courses.  There is a joint honours degree with the Department
of Linguistics.  A large number of students are registered with the Department
for postgraduate degrees.  An MSc/PhD in Cognitive Science is provided in
collaboration with the departments of Linguistics, Philosophy and Psychology
under the aegis of the School of Epistemics.  The Department contributes two
modules on this:  Symbolic Computation and Computational Linguistics.  This
course has been accepted as a SERC supported conversion course.  In October
1983 a new MSc programme in IT started.  This is a joint activity with the
Departments of Computer Science and Electrical Engineering.  It has a large
IKBS content which is supported by SERC.

Computing Facilities in the Department of A.I.

Computing requirements of researchers are being met largely through the
SERC DEC-10 situated at the Edinburgh Regional Computing Centre or residually
through use of UGC facilities.  Undergraduate computing for A.I. courses is
supported by the EMAS facilities at ERCC.  Postgraduate computing on courses
is mainly provided through a VAX 11/750 Berkeley 4.1BSD UNIX system within the
Department.  Several groups in the Department use the ICL PERQ single user
machine.  A growth in the use of this and other single user machines is
envisaged over the next few years.  The provision of shared resources to these
systems in a way which allows for this growth in an orderly fashion is a
problem the Department wishes to solve.

It is anticipated that several further multi-user computers will soon be
installed - one at each site of the Department - to act as the hub of future
computing provision for the research pursued in Artificial Intelligence.

------------------------------

End of AIList Digest
********************

∂06-Oct-83  1525	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #71
Received: from SRI-AI by SU-AI with TCP/SMTP; 6 Oct 83  15:25:33 PDT
Date: Thursday, October 6, 1983 9:55AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #71
To: AIList@SRI-AI


AIList Digest            Thursday, 6 Oct 1983      Volume 1 : Issue 71

Today's Topics:
  Humor - The Lightbulb Issue in AI,
  Reports - Edinburgh AI Memos,
  Rational Psychology,
  Halting Problem,
  Artificial Organisms,
  Technology Transfer,
  Seminar - NL Database Updates
----------------------------------------------------------------------

Date: 6 Oct 83 0053 EDT (Thursday)
From: Jeff.Shrager@CMU-CS-A
Subject: The lightbulb issue in AI.

How many AI people does it take to change a lightbulb?

At least 55:

   The problem space group (5):
        One to define the goal state.
        One to define the operators.
        One to describe the universal problem solver.
        One to hack the production system.
        One to indicate how it is a model of human lightbulb
         changing behavior.

   The logical formalism group (16):
        One to figure out how to describe lightbulb changing in
         first order logic.
        One to figure out how to describe lightbulb changing in
         second order logic.
        One to show the adequacy of FOL.
        One to show the inadequacy of FOL.
        One to show that lightbulb logic is non-monotonic.
        One to show that it isn't non-monotonic.
        One to show how non-monotonic logic is incorporated in FOL.
        One to determine the bindings for the variables.
        One to show the completeness of the solution.
        One to show the consistency of the solution.
        One to show that the two just above are incoherent.
        One to hack a theorem prover for lightbulb resolution.
        One to suggest a parallel theory of lightbulb logic theorem
         proving.
        One to show that the parallel theory isn't complete.
        ...ad infinitum (or absurdum as you will)...
        One to indicate how it is a description of human lightbulb
         changing behavior.
        One to call the electrician.

   The robotics group (10):
        One to build a vision system to recognize the dead bulb.
        One to build a vision system to locate a new bulb.
        One to figure out how to grasp the lightbulb without breaking it.
        One to figure out how to make a universal joint that will permit
         the hand to rotate 360+ degrees.
        One to figure out how to make the universal joint go the other way.
        One to figure out the arm solutions that will get the arm to the
         socket.
        One to organize the construction teams.
        One to hack the planning system.
        One to get Westinghouse to sponsor the research.
        One to indicate how the robot mimics human motor behavior
         in lightbulb changing.

   The knowledge engineering group (6):
        One to study electricians' changing lightbulbs.
        One to arrange for the purchase of the lisp machines.
        One to assure the customer that this is a hard problem and
         that great accomplishments in theory will come from his support
         of this effort. (The same one can arrange for the fleecing.)
        One to study related research.
        One to indicate how it is a description of human lightbulb
         changing behavior.
        One to call the lisp hackers.

   The Lisp hackers (13):
        One to bring up the chaos net.
        One to adjust the microcode to properly reflect the group's
         political beliefs.
        One to fix the compiler.
        One to make incompatible changes to the primitives.
        One to provide the Coke.
        One to rehack the Lisp editor/debugger.
        One to rehack the window package.
        Another to fix the compiler.
        One to convert code to the non-upward compatible Lisp dialect.
        Another to rehack the window package properly.
        One to flame on BUG-LISPM.
        Another to fix the microcode.
        One to write the fifteen lines of code required to change the
         lightbulb.

   The Psychological group (5):
        One to build an apparatus which will time lightbulb
         changing performance.
        One to gather and run subjects.
        One to mathematically model the behavior.
        One to call the expert systems group.
        One to adjust the resulting system so that it drops the
         right number of bulbs.

[My apologies to groups I may have neglected.  Pages to code before
 I sleep.]

------------------------------

Date: Saturday, 1-Oct-83  15:13:42-BST
From: BUNDY HPS (on ERCC DEC-10) <bundy@edxa>
Reply-to: bundy@rutgers.arpa
Subject: Edinburgh AI Memos


        If you want to receive a regular abstracts list and order form
for Edinburgh AI technical reports then write (steam mail I'm afraid)
to Margaret Pithie, Department of Artificial Intelligence, Forrest
Hill, Edinburgh, Scotland.  Give your name and address and ask to be put
on the mailing list for abstracts.

                        Alan Bundy

------------------------------

Date: 29 Sep 83 22:49:18-PDT (Thu)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: Rational Psychology - (nf)
Article-I.D.: uiucdcs.3046


The book mentioned, Metaphors We Live By, was written by George Lakoff
and Mark Johnson.  It contains some excellent ideas and is written in a
style that makes for fast, enjoyable reading.

--Rick Dinitz
uicsl!dinitz

------------------------------

Date: 28 Sep 83 10:32:35-PDT (Wed)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Rational Psychology [and Reply]


I must say it's been exciting listening to the analysis of what "Rational
Psychology" might mean or should not mean.  Should I go read the actual
article that started it all?  Perish the thought.  Is psychology rational?
Someone said that all sciences are rational, a moot point, but not that
relevant unless one wishes to consider Psychology a science.  I do not.
This does not mean that psychologists are in any way inferior to chemists
or to REAL scientists like those who study physics.  But I do think there
is a difference IN KIND between these fields and psychology.  Very few of
us have any close intimate relationships with carbon compounds or
interstellar gas clouds.  (At least not since the waning of the LSD era.)  But
with psychology, anyone NOT in this category has no business in the field.
(I presume we are talking Human psychology.)

The way this difference might exert itself is quite hard to predict, tho
in my brief foray into psychology it was not so hard to spot.  The great
danger is a highly amplified form of anthropomorphism which leads one to
form technical opinions quite possibly unrelated to technical or theoretical
analysis.  In physics, there is a superficially similar process in which
the scientist develops a theory which seems to be a "pet theory" and then
sets about trying to show it true or false.  The difference is that the
physicist developed his pet theory from technical origins rather than from
personal experience.  There is no other origin for his ideas unless you
speculate that people have an inborn understanding of psi-mesons or spin
orbitals.  Such theories MUST have developed from these ideas.  In
psychology, the theory may well have been developed from a big scary dog
when the psychologist was two.  THAT is a difference in kind, and I think
that is why I will always be suspicious of psychologists.
----GaryFostel----

[I think that is precisely the point of the call for rational psychology.
It is an attempt to provide a solid theoretical underpinning based on
the nature of mind, intelligence, emotions, etc., without regard to
carbon-based implementations or the necessity of explaining human psychoses.
As such, rational psychology is clearly an appropriate subject for
AIList and net.ai.  Traditional psychology, and subjective attacks or
defenses of it, are less appropriate for this forum.  -- KIL]

------------------------------

Date: 2 Oct 83 1:42:26-PDT (Sun)
From: ihnp4!ihuxv!portegys @ Ucb-Vax
Subject: Re: the Halting problem
Article-I.D.: ihuxv.565

I think that the answer to the halting problem in an intelligent
entity is that there must exist a mechanism for telling it
whether its efforts are getting it anywhere, i.e. something that
senses its internal state and says if things are getting better,
worse, or whatever.  Normally for humans, if a "loop" were to
begin, it should soon be broken by concerns like "I'm hungry
now, let's eat".  No amount of cogitation makes that feeling
go away.

I would rather call this mechanism need than emotion, since I
think that some emotions are learned.

So then, needs serve two purposes for intelligence: (1) they supply
a direction for the learning which is a necessary part of
intelligence, and (2) they keep the intelligence from getting
bogged down in fruitless cogitation.
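
The loop-breaking mechanism described above can be sketched as follows; the
progress scores and the `sense_need` signal are invented for illustration (a
sketch of the idea, not a serious proposal):

```python
def cogitate(step, sense_need, max_stalls=3):
    """Run an iterative reasoner, but break the "loop" the way the
    posting suggests: a felt need interrupts cogitation, and a sensed
    lack of progress ends it.  `step` returns (result, progress), with
    result None until an answer is found; `sense_need` returns a string
    naming an urgent need, or None.  All names here are illustrative."""
    best, stalls = 0.0, 0
    while True:
        need = sense_need()
        if need is not None:
            return ("interrupted", need)    # "I'm hungry now, let's eat"
        result, progress = step()
        if result is not None:
            return ("solved", result)
        if progress > best:
            best, stalls = progress, 0      # things are getting better
        else:
            stalls += 1                     # no improvement sensed
            if stalls >= max_stalls:
                return ("gave up", best)    # stop fruitless cogitation
```

A reasoner wrapped this way can never spin forever: either the task finishes,
a need preempts it, or the stall counter forces it to give up.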

             Tom Portegys
             Bell Labs, IH
             ihuxv!portegys

------------------------------

Date: 3 Oct 83 20:22:47 EDT  (Mon)
From: Speaker-To-Animals <speaker%umcp-cs@UDel-Relay>
Subject: Re:  Artificial Organisms

Why would we want to create machines equivalent to people when
organisms already have a means to reproduce themselves?

Because then we might be able to make them SMARTER than humans
of course!  We might also learn something about ourselves along
the way too.

                                                        - Speaker

------------------------------

Date: 30 Sep 83 1:16:31-PDT (Fri)
From: decvax!genrad!mit-eddie!barmar @ Ucb-Vax
Subject: November F&SF
Article-I.D.: mit-eddi.774

Some of you may be interested in reading Isaac Asimov's article in the
latest (November, I think) Magazine of Fantasy and Science Fiction.  The
article is entitled "More Thinking about Thinking", and is the Good
Doctor's views on artificial intelligence.  He makes a very good case
for the idea that non-human thinking (i.e. in computers and
dolphins) is likely to be very different from, and perhaps superior to, human
thinking.  He uses an effective analogy to locomotion: artificial
locomotion, namely the wheel, is completely unlike anything found in
nature.
--
                        Barry Margolin
                        ARPA: barmar@MIT-Multics
                        UUCP: ..!genrad!mit-eddie!barmar

------------------------------

Date: Mon, 3 Oct 83 23:17:18 EDT
From: Brint Cooper (CTAB) <abc@brl-bmd>
Subject: Re:  Alas, I must flame...

I don't believe, as you assert, that the motive for clearing
papers produced under DOD sponsorship is 'economic' but, alas,
military.  You then may justly argue the merits of non-export
of things militarily important vs the benefits which accrue
to all of us by a free and open exchange.

I'm not taking sides--yet--but am trying to see the issue
clearly defined.

Brint

------------------------------

Date: Tue, 4 Oct 83 8:16:20 EDT
From: Earl Weaver (VLD/VMB) <earl@brl-vat>
Subject: Flame on DoD

No matter what David Rogers @ sumex-aim thinks, the DoD "review" of all papers
before publishing is not to keep information private, but to make sure no
classified stuff gets out where it shouldn't be and to identify any areas
of personal opinion or thinking that could be construed to be official DoD
policy or position.  I think it will have very little effect on actually
restricting information.

As with most research organizations, the DoD researchers are not immune to the
powers of the bean counters and must publish.

------------------------------

Date: Mon 3 Oct 83 16:44:24-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. oral

                      [Reprinted from the SU-SCORE bboard.]



                          Computer Science Department

                           Ph.D. Oral, Jim Davidson

                         October 18, 1983 at 2:30 p.m.

                            Rm. 303, Building 200

                Interpreting Natural Language Database Updates

Although the problems of querying databases in natural language are well
understood, the performance of database updates via natural language introduces
additional difficulties.  This talk discusses the problems encountered in
interpreting natural language updates, and describes an implemented system that
performs simple updates.

The difficulties associated with natural language updates result from the fact
that the user will naturally phrase requests with respect to his conception of
the domain, which may be a considerable simplification of the actual underlying
database structure.  Updates that are meaningful and unambiguous from the
user's standpoint may not translate into reasonable changes to the underlying
database.

The PIQUE system (Program for Interpretation of Queries and Updates in English)
operates by maintaining a simple model of the user, and interpreting update
requests with respect to that model.  For a given request, a limited set of
"candidate updates"--alternative ways of fulfilling the request--are
considered, and ranked according to a set of domain-independent heuristics that
reflect general properties of "reasonable" updates.  The leading candidate may
be performed, or the highest ranking alternatives presented to the user for
selection.  The resultant action may also include a warning to the user about
unanticipated side effects, or an explanation for the failure to fulfill a
request.

This talk describes the PIQUE system in detail, presents examples of its
operation, and discusses the effectiveness of the system with respect to
coverage, accuracy, efficiency, and portability.  The range of behaviors
required for natural language update systems in general is discussed, and
implications of updates on the design of data models are briefly considered.
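
The candidate-ranking step described in the abstract might be sketched as
below; the heuristics and candidate fields are invented for illustration and
are not PIQUE's own:

```python
def rank_candidates(candidates, heuristics):
    """Rank alternative ways of fulfilling an update request: score
    each candidate against a set of domain-independent heuristics and
    sort best-first, as the abstract describes."""
    def score(c):
        return sum(h(c) for h in heuristics)
    return sorted(candidates, key=score, reverse=True)

# Hypothetical heuristics: prefer updates that touch fewer tuples and
# cause fewer side effects visible in the user's view of the domain.
heuristics = [
    lambda c: -c["tuples_changed"],
    lambda c: -10 * c["side_effects"],
]

candidates = [
    {"name": "update-employee-row", "tuples_changed": 1,  "side_effects": 0},
    {"name": "rebuild-department",  "tuples_changed": 40, "side_effects": 2},
]
```

The leading candidate can then be performed outright, or the top few presented
to the user for selection, exactly as the abstract outlines.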

------------------------------

End of AIList Digest
********************

∂10-Oct-83  1623	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #72
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Oct 83  16:22:34 PDT
Date: Monday, October 10, 1983 10:16AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #72
To: AIList@SRI-AI


AIList Digest            Monday, 10 Oct 1983       Volume 1 : Issue 72

Today's Topics:
  Administrivia - AIList Archives,
  Music & AI - Request,
  NL - Semantic Chart Parsing & Simple English Grammar,
  AI Journals - Address of "Artificial Intelligence",
  Alert - IEEE Computer Issue,
  Seminars - Stanfill at Univ. of Maryland, Zadeh at Stanford,
  Commonsense Reasoning
----------------------------------------------------------------------

Date: Sun 9 Oct 83 18:03:24-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: AIList Archives

The archives have grown to the point that I can no longer
keep them available online.  I will keep the last three months'
issues available in <ailist>archive.txt on SRI-AI.  Preceding
issues will be backed up on tape, and will require about a
day's notice to recover.  The tape archive will consist of
quarterly composites (or smaller groupings, if digest activity
gets any higher than it has been).  The file names will be of
the form AIL1N1.TXT, AIL1N19.TXT, etc.  All archives will be in
the MMAILR mailer format.

The online archive may be obtained via FTP using anonymous login.
Since a quarterly archive can be very large (up to 300 disk pages)
it will usually be better to ask me for particular issues than to
FTP the whole file.

                                        -- Ken Laws

------------------------------

Date: Thu, 25 Aug 83 00:07:53 PDT
From: uw-beaver!utcsrgv!nixon@LBL-CSAM
Subject: AIList Archive- Univ. of Toronto

[I previously put out a request for online archives that could
be obtained by anonymous FTP.  There were very few responses.
Perhaps this one will be of use.  -- KIL]


Dear Ken,
  Copies of the AIList Digest are kept in directory /u5/nixon/AIList
with file names V1.5, V1.40, etc.  Our uucp site name is "utcsrgv".
This is subject to change in the very near future as the AI group at the
University of Toronto will be moving to a new computer.
  Brian Nixon.

------------------------------

Date: 4 Oct 83 9:23:38-PDT (Tue)
From: hplabs!hao!cires!nbires!ut-sally!riddle @ Ucb-Vax
Subject: Re: Music & AI, pointers wanted
Article-I.D.: ut-sally.86

How about posting the results of the music/ai poll to the net?  There
have been at least two similar queries in recent memory, indicating at
least a bit of general interest.

[...]

                                 -- Prentiss Riddle
                                    {ihnp4,kpno,ctvax}!ut-sally!riddle
                                    riddle@ut-sally.UUCP

------------------------------

Date: 5 Oct 83 19:54:32-PDT (Wed)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: Re: NL argument between STLH and Per - (nf)
Article-I.D.: uiucdcs.3132

I've heard of "syntactic chart parsing," but what is "semantic chart
parsing?"  It sounds interesting, and I'd like to hear about it.

I'm also interested in seeing your paper.  Please make arrangements with me
via net mail.

Rick Dinitz
U. of Illinois
...!uicsl!dinitz

------------------------------

Date: 3 Oct 83 18:39:00-PDT (Mon)
From: pur-ee!ecn-ec.davy @ Ucb-Vax
Subject: WANTED: Simple English Grammar - (nf)
Article-I.D.: ecn-ec.1173


Hello,

        I am looking for a SIMPLE set of grammar rules for English.  To
be specific, I'm looking for something of the form:

                SENT = NP + VP ...
                  NP = DET + ADJ + N ...
                  VP = ADV + V + DOBJ ...

                      etc.

I would prefer a short set of rules, something on the order of one or two
hundred lines.  I realize that this isn't enough to cover the whole English
language, I don't want it to.  I just want something which could handle
"simple" sentences, such as "The cat chased the mouse", etc.  I would like
to have rules for questions included, so that something like "What does a
hen weigh?" can be covered.

        I've scoured our libraries here, and have only found one book with
a grammar for English in it, and it's much more complex than what I want.
Any pointers to books/magazines or grammars themselves would be greatly
appreciated.

Thanks in advance (as the saying goes)
--Dave Curry
decvax!pur-ee!davy
eevax.davy@purdue
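
A rule set of the requested form, together with a naive backtracking
recognizer, might be sketched as follows.  The categories follow the posting,
the coverage is deliberately tiny, and the question rules are omitted:

```python
# A toy grammar in the SENT = NP + VP style the posting asks for.
GRAMMAR = {
    "SENT": [["NP", "VP"]],
    "NP":   [["DET", "N"], ["N"]],
    "VP":   [["V", "NP"], ["V"]],
}
LEXICON = {
    "DET": {"the", "a"},
    "N":   {"cat", "mouse", "hen"},
    "V":   {"chased", "ran"},
}

def parse(symbol, words, i):
    """Yield every position reachable after matching `symbol` at i."""
    if symbol in LEXICON:                      # terminal category
        if i < len(words) and words[i] in LEXICON[symbol]:
            yield i + 1
        return
    for rhs in GRAMMAR[symbol]:                # try each expansion
        positions = [i]
        for part in rhs:
            positions = [k for j in positions for k in parse(part, words, j)]
        yield from positions

def accepts(sentence):
    words = sentence.lower().split()
    return len(words) in parse("SENT", words, 0)
```

A real grammar of one or two hundred rules would follow the same shape, just
with more categories and expansions.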

------------------------------

Date: 6 Oct 83 17:21:29-PDT (Thu)
From: ihnp4!cbosgd!cbscd5!lvc @ Ucb-Vax
Subject: Address of "Artificial Intelligence"
Article-I.D.: cbscd5.739

Here is the address of "Artificial Intelligence" if anyone is interested:

    Artificial Intelligence  (bi-monthly $136 -- Ouch !)
    North-Holland Publishing Co.,
    Box 211, 1000 AE
    Amsterdam, Netherlands.

    Editors D.G. Bobrow, P.J. Hayes

    Advertising, book reviews, circulation 1,100

    Also avail. in microform from

    Microforms International Marketing Co.
    Maxwell House
    Fairview Park
    Elmsford NY 10523

    Indexed: Curr. Cont.

Larry Cipriani
cbosgd!cbscd5!lvc

[There is a reduced rate for members of AAAI. -- KIL]

------------------------------

Date: Sun 9 Oct 83 17:45:52-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: IEEE Computer Issue

Don't miss the October 1983 issue of IEEE Computer.  It is a
special issue on knowledge representation, and includes articles
on learning, logic, and other related topics.  There is also a
short list of 30 expert systems on p. 141.

------------------------------

Date: 8 Oct 83 04:18:04 EDT  (Sat)
From: Bruce Israel <israel%umcp-cs@UDel-Relay>
Subject: University of Maryland AI talk

        [Reprinted from the University of Maryland BBoard]

The University of Maryland Computer Science Dept. is starting an
informal AI seminar, meeting every other Thursday in Room 2330,
Computer Science Bldg, at 5pm.

The first meeting will be held Thursday, October 13.  All are welcome
to attend.  The abstract for the talk follows.

                              MAL: My AI Language

                                Craig Stanfill
                        Department of Computer Science
                            University of Maryland
                            College Park, MD 20742

     In the course of writing my thesis, I implemented an AI language, called
MAL, for manipulating symbolic expressions.  MAL runs in the University of
Maryland Franz Lisp Environment on a VAX 11/780 under Berkeley Unix (tm) 4.1.
MAL is of potential benefit in knowledge representation research, where it
allows the development and testing of knowledge representations without
building an inference engine from scratch, and in AI education, where it
should allow students to experiment with a simple AI programming language.
MAL provides for:

1.   The  representation  of  objects  and  queries  as  symbolic  expressions.
     Objects  are  recursively  constructed from sets, lists, and bags of atoms
     (as in QLISP).  A powerful and efficient pattern matcher is provided.

2.   The rule-directed simplification of expressions.  Limited  facilities  for
     depth first search are provided.

3.   Access to a database.  Rules  can  assert  and  fetch  simplifications  of
     expressions.  The database also employs a truth maintenance system.

4.   The construction of large AI systems by the combination of simpler modules
     called domains.  For each domain, there is a database, a set of rules, and
     a set of links to other domains.

5.   A set of domains which are generally useful, especially for spatial
     reasoning.  This includes domains for solid and linear geometry, and for
     algebra.

6.   Facilities which allow the user to customize MAL (to a degree).  Calls  to
     arbitrary LISP functions are supported, allowing the language to be easily
     extended.
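
Rule-directed simplification (point 2) can be sketched in miniature; the rule
format and expression representation below are invented for illustration and
are not MAL's own:

```python
# Each rule pairs a pattern (op, "x", constant) with a result, where
# "x" stands for an arbitrary subexpression.
SIMPLIFICATION_RULES = [
    (("+", "x", 0), "x"),          # x + 0  ->  x
    (("*", "x", 1), "x"),          # x * 1  ->  x
    (("*", "x", 0), 0),            # x * 0  ->  0
]

def simplify(expr):
    """Apply the rules bottom-up: simplify subexpressions first, then
    try each rule at the top level."""
    if not isinstance(expr, tuple):
        return expr                          # an atom is already simple
    expr = tuple(simplify(e) for e in expr)
    for (op, _, const), result in SIMPLIFICATION_RULES:
        if expr[0] == op and expr[2] == const:
            return expr[1] if result == "x" else result
    return expr
```

A full system of this kind adds what the abstract lists: a pattern matcher, a
database of asserted simplifications, and modular domains of rules.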

------------------------------

Date: Thu 6 Oct 83 20:18:09-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: Colloquium Oct 11: ZADEH

                [Reprinted from the SU-SCORE bboard.]


Professor Lotfi Zadeh, of UCB,  will be giving the CS colloquium this
Tuesday (10/11).  As usual, it  will be in Terman Auditorium, at 4:15
(preceded at 3:45 by refreshments in the 3rd floor lounge of Margaret
Jacks Hall).

The title and abstract for the colloquium are as follows:

Reasoning With Commonsense Knowledge

Commonsense knowledge is exemplified  by "Glass is brittle," "Cold is
infectious,"  "The rich are  conservative," "If  a car is  old, it is
unlikely to  be in good shape," etc.  Such  knowledge forms the basis
for most of human reasoning in everyday situations.

Given  the pervasiveness  of commonsense reasoning,  a question which
begs for answer is: Why  is commonsense reasoning a neglected area in
classical  logic?    Because,  almost   by  definition,   commonsense
knowledge  is  that  knowledge   which  is  not  representable  as  a
collection  of  well-formed  formulae in  predicate  logic  or  other
logical  systems which  have the  same basic  conceptual structure as
predicate logic.

The approach to commonsense  reasoning which is described in the talk
is based on the use of fuzzy logic -- a logic which allows the use of
fuzzy predicates, fuzzy  quantifiers and fuzzy truth-values.  In this
logic,  commonsense  knowledge  is defined  to  be  a  collection  of
dispositions, that is, propositions with suppressed fuzzy quantifiers.
To infer  from such knowledge, three  basic syllogisms are developed:
(1)   the   intersection/product  syllogism;   (2)   the   consequent
conjunction syllogism; and  (3) the antecedent conjunction syllogism.
The  use of  these  syllogisms  in commonsense  reasoning  and  their
application to  the  combination of  evidence  in expert  systems  is
discussed and illustrated by examples.
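
The intersection/product syllogism admits a simplified numeric reading: if at
least a proportion q1 of A's are B's, and at least q2 of (A and B)'s are C's,
then at least q1*q2 of A's are (B and C)'s.  The interval representation below
is our simplification for illustration, not Zadeh's fuzzy-number formulation:

```python
def product_syllogism(q1, q2):
    """Chain two quantified statements: given (lo, hi) bounds on the
    proportion in each premise, return bounds on the conclusion's
    proportion.  Treating fuzzy quantifiers ("most", "few") as crisp
    intervals is a deliberate simplification."""
    lo1, hi1 = q1
    lo2, hi2 = q2
    return (lo1 * lo2, min(hi1 * hi2, 1.0))

# Reading "most" as "at least 80%" and chaining it with itself gives
# a weaker conclusion: at least 64%.
most = (0.8, 1.0)
```

The weakening under chaining is the characteristic behaviour: commonsense
conclusions get less certain the longer the inference chain.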

------------------------------

Date: Fri 7 Oct 83 09:42:30-PDT
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM>
Subject: "rich" = "conservative" ?

                [Reprinted from the SU-SCORE bboard.]


        Subject: Colloquium Oct 11: ZADEH
        The title and abstract for the colloquium are as follows:
        Reasoning With Commonsense Knowledge

I don't think I've seen flames in response to abstracts before, but I get
so sick of hearing "rich," "conservative," and "evil" used as synonyms.

    Commonsense knowledge is exemplified by [...] "The rich are
    conservative," [...].

In fact, in the U.S., 81% of people with incomes over $50,000 are
registered Democrats.  Only 47% with incomes under $50,000 are.  (The
remaining 53% are made up of "independents," &c..)  The Democratic
Party gets the majority of its funding from contributions of over
$1000 apiece.  The Republican Party is mostly funded by contributions
of $10 and under.  (Note: I'd be the last to equate Conservatism and
the Republican Party.  I am a Tory and a Democrat.  However, more
"commonsense knowledge" suggests that I can use the word "Republican"
in place of "conservative" for the purpose of refuting the equation
of "rich" and "conservative.")

    Such knowledge forms the basis for most of human reasoning in everyday
    situations.

This statement is so true that it is the reason I gave up political writing.

    Given  the pervasiveness  of commonsense reasoning,  a question which
    begs for answer is: Why  is commonsense reasoning a neglected area in
    classical  logic? [...]

Perhaps because false premises tend to give rise to false conclusions?  Just
what we need--"ignorant systems."  (:-)
--Christopher

------------------------------

Date: Fri 7 Oct 83 10:22:37-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM>
Subject: Re: "rich" = "conservative" ?

                [Reprinted from the SU-SCORE bboard.]


Why is logic a neglected area in commonsense reasoning?  (to say nothing of
political writing)?

More seriously, or at least more historically, a survey was once taken of
ecological and other pressure groups in England, asking them which had been the
most and least effective methods they had used to convince governmental bodies.
Right at the bottom of the list of "least effective" was Reasoned Argument.

                                        - Richard

------------------------------

Date: Fri, 7 Oct 83 10:36 PDT
From: Vaughan Pratt <pratt@Navajo>
Subject: Reasoned Argument

                [Reprinted from the SU-SCORE bboard.]

[...]

I think if "Breathing" had been on the list along with "Reasoned
Argument" then the latter would only have come in second last.
It is not that reasoned argument is ineffective but that it is on
a par with breathing, namely something we do subconsciously.  Consciously
performed reasoning is only marginally reliable in mathematical circles,
and quite unreliable in most other areas.  It makes most people dizzy,
much as consciously performed breathing does.

-v

------------------------------

End of AIList Digest
********************

∂10-Oct-83  2157	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #73
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Oct 83  21:55:56 PDT
Date: Monday, October 10, 1983 4:17PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #73
To: AIList@SRI-AI


AIList Digest            Tuesday, 11 Oct 1983      Volume 1 : Issue 73

Today's Topics:
  Halting Problem,
  Consciousness,
  Rational Psychology
----------------------------------------------------------------------

Date: Thu 6 Oct 83 18:57:04-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Halting problem discussion

This discussion assumes that "human minds" are at least equivalent
to Universal Turing Machines. If they are restricted to computing
smaller classes of recursive functions, the question dissolves.

Sequential computers are idealized as having infinite memory because
that makes it easier to study asymptotic behavior mathematically.  Of
course, we all know that a more accurate idealization of sequential
computers is the finite automaton (for which there is no halting
problem, of course!).

The discussion on this issue seemed to presuppose that "minds" are the
same kind of object as existing (finite!) computing devices. Accepting
this presupposition for a moment (I am agnostic on the matter), the
above argument applies and the discussion is shown to be vacuous.

Thus fall undecidability arguments in psychology and linguistics...

Fernando Pereira

PS. Any silliness about unlimited amounts of external memory
will be profitably avoided.

------------------------------

Date: 7 Oct 83 1317 EDT (Friday)
From: Robert.Frederking@CMU-CS-A (C410RF60)
Subject: AI halting problem

        Actually, this isn't a problem, as far as I can see.  The Halting
Problem's problem is: there cannot be a program for a Turing-equivalent
machine that can tell whether *any* arbitrary program for that machine will
halt.  The easiest proof that a Halts(x) procedure can't exist is the
following program:  (due to Jon Bentley, I believe)
        if halts(x) then
                while true do print("rats")
What happens when you start this program up, with itself as x?  If
halts(x) returns true, it won't halt, and if halts(x) returns false, it
will halt.  This is a contradiction, so halts(x) can't exist.
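[The construction above can actually be run against any one candidate
decider, which is then refuted by its own prediction.  The sketch below is
illustrative; the names are invented, and of course only a candidate that
answers "does not halt" can be tested without looping forever.]

```python
def make_trouble(halts):
    """Build the proof's self-referential program: loop forever
    exactly when the claimed decider predicts that we halt."""
    def trouble():
        if halts(trouble):
            while True:        # the 'print("rats")' loop
                pass
        # otherwise fall through and halt
    return trouble

# A candidate decider that always answers "does not halt":
def candidate(program):
    return False

trouble = make_trouble(candidate)
trouble()                        # returns immediately -- so it DID halt...
prediction = candidate(trouble)  # ...yet the candidate predicted otherwise.
print(prediction)                # False: the candidate decider was wrong
```

[A candidate answering True is refuted symmetrically: trouble() then loops
forever, contradicting the prediction that it halts.]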

        My question is, what does this have to do with AI?  Answer, not
much.  There are lots of programs which always halt.  You just can't
have a program which can tell you *for* *any* *program* whether it will
halt.  Furthermore, human beings don't want to halt, i.e., die (this
isn't really a problem, since the question is whether their mental
subroutines halt).

        So as long as the mind constructs only programs which will
definitely halt, it's safe.  Beings which aren't careful about this
fail to breed, and are weeded out by evolution.  (Serves them right.)
All of this seems to assume that people are Turing-equivalent (without
pencil and paper), which probably isn't true, and certainly hasn't been
proved.  At least I can't simulate a PDP-10 in my head, can you?  So
let's get back to real discussions.

------------------------------

Date: Fri,  7 Oct 83 13:05:16 CDT
From: Paul.Milazzo <milazzo.rice@Rand-Relay>
Subject: Looping in humans

Anyone who believes the human mind incapable of looping has probably
never watched anyone play Rogue :-).  The success of Rogomatic (the
automatic Rogue-playing program by Mauldin et al.) demonstrates that
the game can be played by deriving one's next move from a simple
*fixed* set of operations on the current game state.
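[A fixed rule set of the kind described might be sketched as below; the
state fields and rules are invented for illustration and are not
Rogomatic's actual ones.]

```python
# Toy fixed-rule player: each turn, scan an ordered rule list and take
# the first action whose condition matches the current game state.
# The policy never changes -- only the state does.

RULES = [
    (lambda s: s["hp"] < 5,           "quaff healing potion"),
    (lambda s: s["monster_adjacent"], "attack monster"),
    (lambda s: s["on_gold"],          "pick up gold"),
    (lambda s: True,                  "explore"),   # default rule
]

def next_move(state):
    for condition, action in RULES:
        if condition(state):
            return action

print(next_move({"hp": 12, "monster_adjacent": False, "on_gold": True}))
# pick up gold
```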

Even in the light of this demonstration, Rogue addicts sit hour after
hour mechanically striking keys, all thoughts of work, food, and sleep
forgotten, until forcibly removed by a girl- or boy-friend or system
crash.  I claim that such behavior constitutes looping.

:-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)

                                Paul Milazzo <milazzo.rice@Rand-Relay>
                                Dept. of Mathematical Sciences
                                Rice University, Houston, TX

P.S.    A note to Rogue fans:  I have played a few games myself, and
        understand the appeal.  One of the Rogomatic developers is a
        former roommate of mine interested in part in overcoming the
        addiction of rogue players everywhere.  He, also, has played
        a few games...

------------------------------

Date: 5 Oct 83 9:55:56-PDT (Wed)
From: hplabs!hao!seismo!philabs!cmcl2!floyd!clyde!akgua!emory!gatech!owens
      @ Ucb-Vax
Subject: Re: a definition of consciousness?
Article-I.D.: gatech.1379

     I was doing required reading for a linguistics class when I
came across an interesting view of consciousness in "Foundations
of the Theory of Signs", by Charles Morris, section VI, subsection
12, about the 6th paragraph (It's also in the International
Encyclopedia of Unified Science, Otto Neurath, ed.).
     To say that Y experiences X is to define a relation E of which
Y is the domain and X is the range.  Thus, yEx says that it is true
that y experiences x.  E does not follow normal relational rules
(not transitive or symmetric:  I can experience joe, and joe can
experience fred, but it's not necessarily so that I thus experience
fred.)  Morris goes on to state that yEx is a "conscious experience"
if yE(yEx) ALSO holds, otherwise it's an "unconscious experience".
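[Morris's definition admits a toy encoding: treat E as a set of
(experiencer, experienced) pairs, where an experience event can itself be
experienced.  The representation and names below are invented for
illustration, not Morris's own.]

```python
# Toy model of the relation E.  An experience is the pair (y, x); it is
# a *conscious* experience iff y also experiences that very pair,
# i.e. both yEx and yE(yEx) hold.

E = set()

def experiences(y, x):
    E.add((y, x))

def conscious(y, x):
    return (y, x) in E and (y, (y, x)) in E

experiences("me", "joe")           # yEx holds: an experience...
assert not conscious("me", "joe")  # ...but not (yet) a conscious one
experiences("me", ("me", "joe"))   # yE(yEx): experiencing the experience
assert conscious("me", "joe")      # now it qualifies as conscious
```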
     Interesting.  Note that there is no infinite regress of
yE(yE(yE....)) that is usually postulated as being a consequence of
computer consciousness.  However the relation E is defined,
it only needs to have the POTENTIAL of being able to fit yEx as an x in
another yEx, where y is itself.  Could the fact that the postulated
computer has the option  of NOT doing the insertion be some basis for
free will???  Would the required infinite regress of yE(yE(yE....
manifest some sort of compulsiveness that rules out free will?? (not to
say that an addict of some sort has no free will, although it's worth
thinking about).
     Question:  Am I trivializing the problem by reducing the question of
whether consciousness exists to the ability to define the relation
E?  Are there OTHER questions that I haven't considered that would
strengthen or weaken that supposition?  No flames, please, since this
ain't a flame.

                                        G. Owens
                                        at gatech  CSNET.

------------------------------

Date: 6 Oct 83 9:38:19-PDT (Thu)
From: ihnp4!ihuxr!lew @ Ucb-Vax
Subject: towards a calculus of the subjective
Article-I.D.: ihuxr.685

I posted some articles to net.philosophy a while back on this topic
but I didn't get much of a rise out of anybody. Maybe this is a better
forum. (Then again, ...) I'm induced to try here by G. Owens's article,
"Re: definition of consciousness".

Instead of trying to formulate a general characteristic of conscious
experience, what about trying to characterize different types of subjective
experience in terms of their physical correlates? In particular, what's
the difference between seeing a color (say) and hearing a sound? Even
more particularly, what's the difference between seeing red, and seeing blue?

I think the last question provides a potential experimental test of
dualism. If it could be shown that the subjective experience of a red
image was constituted by an internal set of "red" image cells, and similarly
for a blue image, I would regard this as a proof of dualism. This is
assuming the "red" and "blue" cells to be physically equivalent. The
choice between which were "red" and which were "blue" would have no
physical basis.

On the other hand, suppose there were some qualitative difference in
the firing patterns associated with seeing red versus seeing blue.
We would have a physical difference to hang our hat on, but we would
still be left with the problem of forming a calculus of the subjective.
That is, we would have to figure out a way to deduce the type of subjective
experience from its physical correlates.

A successful effort might show how to experience completely new colors,
for example. Maybe our restriction to a 3-d color space is due to
the restricted stimulation of subjective color space by three inputs.
Any acid heads care to comment?

These thoughts were inspired by Thomas Nagel's "What is it like to be a bat?"
in "The Minds I". I think the whole subjective-objective problem is
given short shrift by radical AI advocates. Hofstadter's critique of
Nagel's article was interesting, but I don't think it addressed Nagel's
main point.

        Lew Mammel, Jr. ihuxr!lew

------------------------------

Date: 6 Oct 83 10:06:54-PDT (Thu)
From: ihnp4!zehntel!tektronix!tekecs!orca!brucec @ Ucb-Vax
Subject: Re: Parallelism and Physiology
Article-I.D.: orca.179

                               -------
Re the article posted by Rik Verstraete <rik@UCLA-CS>:

In general, I agree with your statements, and I like the direction of
your thinking.  If we conclude that each level of organization in a
system (e.g. a conscious mind) is based in some way on the next lower
level, it seems reasonable to suppose that there is in some sense a
measure of detail, a density of organization if you will, which has a
lower limit for a given level before it can support the next level.
Thus there would be, in the same sense, a median density for the
levels of the system (mind), and a standard deviation, which I
conjecture would be bounded in any successful system (only the top
level is likely to be wildly different in density, and that lower than
the median).

        Maybe the distinction between the words learning and
        self-organization is only a matter of granularity too. (??)

I agree.  I think that learning is simply a sophisticated form of
optimization of a self-organizing system in a *very* large state
space.  Maybe I shouldn't have said "simply."  Learning at the level of
human beings is hardly trivial.

        Certainly, there are not physically two types of memories, LTM
        and STM.  The concept of LTM/STM is only a paradigm (no doubt a
        very useful one), but when it comes to implementing the concept,
        there is a large discrepancy between brains and machines.

Don't rush to decide that there aren't two mechanisms.  The concepts of
LTM and STM were developed as a result of observation, not from theory.
There are fundamental functional differences between the two.  They
*may* be manifestations of the same physical mechanism, but I don't
believe there is strong evidence to support that claim.  I must admit
that my connection to neurophysiology is some years in the past
so I may be unaware of recent research.  Does anyone out there have
references that would help in this discussion?

------------------------------

Date: 7 Oct 83 15:38:14-PDT (Fri)
From: harpo!floyd!vax135!ariel!norm @ Ucb-Vax
Subject: Re: life is but a dream
Article-I.D.: ariel.482

re Michael Massimilla's idea (not original, of course) that consciousness
and self-awareness are ILLUSIONS.  Where did he get the concept of ILLUSION?
The stolen concept fallacy strikes again!  This fallacy is that of using
a concept while denying its genetic roots... See back issues of the Objectivist
for a discussion of this fallacy.... --Norm on ariel, Holmdel, N.J.

------------------------------

Date: 7 Oct 83 11:17:36-PDT (Fri)
From: ihnp4!ihuxr!lew @ Ucb-Vax
Subject: life is but a dream
Article-I.D.: ihuxr.690

Michael Massimilla informs us that consciousness and self-awareness are
ILLUSIONS. This is like saying "It's all in your mind." As Nietzsche said,
"One sometimes remains faithful to a cause simply because its opponents
do not cease to be insipid."

        Lew Mammel, Jr. ihuxr!lew

------------------------------

Date: 5 Oct 83 1:07:31-PDT (Wed)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Rational Psychology
Article-I.D.: ncsu.2357


Someone's recent attempt to make the meaning of "Rational Psychology" seem
trivial misses the point a number of people have made in commenting on the
odd nature of the name.  The reasoning was something like this:
 1) rational "X" means the same thing regardless of what "X" is.
 2) => rational psychology is a clear and simple thing.
 3) wake up guys, you're being dumb.

Well, I think this line misses at least one point.  The argument above
is probably sound provided one accepts the initial premise, which I do not
necessarily accept.  Another example of the logic may help.
 1) Brute Force elaboration solves problems of set membership.  E.g. just
    look at the item and compare it with every member of the set.  This
    is a true statement for a wide range of possible sets.
 2) Real Numbers are a kind of set.
 3) Wake up Cantor, you're wasting (or have wasted) your time.
It seems quite clear that in the latter example, the premise is naive and
simply fails to apply to sets of infinite proportions. (Or more properly
one must go to some effort to justify such use.)

The same issue applies to the notion of Rational Psychology.  Does it make
sense to attempt to apply techniques which may be completely inadequate?
Rational analysis may fail completely to explain the workings of the mind,
esp when we are looking at the "non-analytic" capabilities that are
implied by psychology.  We are on the edge of a philosophical debate, with
terms like "dual-ism" and "physical-ism" etc. marking out party lines.

It may be just as ridiculous to some people to propose a rational study
of psychology as it seems to most of us that one use finite analysis
to deal with trans-finite cardinalities [or] as it seems to some people to
propose to explain the mind via physics alone.  Clearly, the people who
expect rational analytic method to be fruitful in the field of psychology
are welcome to coin a new name for themselves.  But if they, or anyone else,
have really "Got it now," please write a dissertation on the subject and
enter history alongside Kant, St. Thomas Aquinas, Kierkegaard ....
----GaryFostel----

------------------------------

Date: 4 Oct 83 8:54:09-PDT (Tue)
From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!velu @ Ucb-Vax
Subject: Rational Psychology - Gary Fostel's message
Article-I.D.: umcp-cs.2953

Unfortunately, however, many pet theories in Physics have come about as
inspirations, and not from the "technical origins" as you have stated!
(What is a "technical origin", anyway????)

As I see it, in any science a pet theory is a combination of insight,
inspiration, and a knowledge of the laws governing that field. If we
just went by known facts, and did not dream on, we would not have
gotten anywhere!

                                        - Velu
                                -----
Velu Sinha, U of MD, College Park
UUCP:   {seismo,allegra,brl-bmd}!umcp-cs!velu
CSNet:  velu@umcp-cs            ARPA:   velu.umcp-cs@UDel-Relay

------------------------------

Date: 6 Oct 83 12:00:15-PDT (Thu)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Intuition in Physics
Article-I.D.: ncsu.2360


Some few days ago I suggested that there was something "different"
about psychology and tried to draw a distinction between the flash
of insight or the pet theory in physics as compared to psychology.

Well, someone else commented on the original, in a way that suggested
I missed the mark in my original effort to make it clear. One more time:

I presume that at birth, one's mind is not predisposed to one or another
of several possible theories of heavy molecule collision (for example.)
Further, I think it unlikely that personal or emotional interaction in
one "pre-analytic" stage (see anything about developmental psych.)
is likely to bear upon one's opinions about those molecules. In fact I
find it hard to believe that anything BUT technical learning is likely
to bear on one's intuition about the molecules. One might want to argue
that one's personality might force one to lean toward "aggressive" or
overly complex theories, but I doubt that such effects will lead to
the creation of a theory.  Only a rather mild predisposition at best.

In psychology it is entirely different.  A person who is aggressive has
lots of reasons to assume everyone else is as well. Or paranoid, or
that rote learning is esp good or bad, or that large dogs are dangerous
or a number of other things that bear directly on one's theories of the
mind.  And these biases are acquired from the process of living and are
quite unavoidable.  This is not technical learning.  The effect is
that even in the face of considerable technical learning, one's intuition
or "pet theories" in psychology might be heavily influenced in creation
of the theory as well as selection, by one's life experiences, possibly
to the exclusion of one's technical opinions. (Who knows what goes on in
the sub-conscious.)  While one does not encounter heavy molecules often
in one's everyday life or one's childhood, one DOES encounter other people
and more significantly one's own mind.

It seems clear that intuition in physics is based upon a different sort
of knowledge than intuition about psychology.  The latter is a combination
of technical AND everyday intuition while the former is not.
----GaryFostel----

------------------------------

End of AIList Digest
********************

∂11-Oct-83  1950	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #74
Received: from SRI-AI by SU-AI with TCP/SMTP; 11 Oct 83  19:49:59 PDT
Date: Tuesday, October 11, 1983 11:25AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #74
To: AIList@SRI-AI


AIList Digest           Wednesday, 12 Oct 1983     Volume 1 : Issue 74

Today's Topics:
  Journals - AI Journal,
  Query - Miller's "Living Systems",
  Technology Transfer - DoD Reviews,
  Consciousness
----------------------------------------------------------------------

Date: Tue, 11 Oct 83 07:54 PDT
From: Bobrow.PA@PARC-MAXC.ARPA
Subject: AI Journal

The information provided by Larry Cipriani about the AI Journal in the
last issue of AINET is WRONG in a number of important particulars.
Institutional subscriptions to the Artificial Intelligence Journal are
$176 this year (not $136).  Personal subscriptions are available
for $50 per year for members of the AAAI, SIGART  and AISB.  The
circulation is about 2,000 (not 1,100).  Finally, the AI journal
consists of eight issues this year, and nine issues next year (not
bimonthly).
Thanks
Dan Bobrow (Editor-in-Chief)
Bobrow@PARC

------------------------------

Date: Mon, 10 Oct 83 15:41 EDT
From: David Axler <Axler.UPenn@Rand-Relay>
Subject: Bibliographic Query

     Just wondering if anybody out there has read the book 'Living Systems'
by James G. Miller (McGraw-Hill, 1977), and, if so, whether they feel that
Miller's theories have any relevance to present-day AI research.  I won't
even attempt to summarize the book's content here, as it's over 1K pages in
length, but some of the reviews of it that I've run across seem to imply that
it might well be useful in some AI work.

     Any comments?

   Dave Axler (Axler.Upenn-1100@UPenn@Udel-Relay)

------------------------------

Date: 7 Oct 1983 08:11-EDT
From: TAYLOR@RADC-TOPS20
Subject: DoD "reviews"


I must agree with Earl Weaver's comments on the DoD review of DoD
sponsored publications with one additional comment...since I have
"lived and worked" in that environment for more than six years.
DoD has learned (through experience) that given enough
unclassified material, much classified information can be
deduced.  I have seen documents whose individual paragraphs were
unclassified, but when grouped together as a single document they
provided too much sensitive information to leave unclassified.
      Roz (RTaylor@RADC-MULTICS)

------------------------------

Date: 4 Oct 83 19:25:13-PDT (Tue)
From: ihnp4!zehntel!tektronix!tekcad!ricks @ Ucb-Vax
Subject: Re: Conference Announcement - (nf)
Article-I.D.: tekcad.66


>              ****************  CONFERENCE  ****************
>
>                     "Intelligent Systems and Machines"
>
>                    Oakland University, Rochester Michigan
>
>                                April 24-25, 1984
>
>              *********************************************
>
>AUTHORS PLEASE NOTE:  A Public Release/Sensitivity Approval is necessary.
>Authors from DOD, DOD contractors, and individuals whose work is government
>funded must have their papers reviewed for public release and more
>importantly sensitivity (i.e. an operations security review for sensitive
>unclassified material) by the security office of their sponsoring agency.


        Another example of so-called "scientists" bowing to governmental
pressure to let them decide if the paper you want to publish is OK to
publish.  I think that this type of activity is reprehensible, and as
concerned scientists we should do everything in our power to stop this
censorship of research.  I urge everyone to boycott this conference and any
others like it which REQUIRE a Public Release/Sensitivity Approval (funny
how the government tries to make censorship palatable with different words,
isn't it?).  If we don't stop this now, we may be passing every bit of
research we do under the nose of bureaucrats who don't know an expert system
from an accounting package and who have the power to stop publication of
anything they consider dangerous.
                                        I'm mad as hell and I'm not going to
                                                take it anymore!!!!
                                                Frank Adrian
                                                (teklabs!tekcad!franka)

------------------------------

Date: 6 Oct 83 6:13:46-PDT (Thu)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!aplvax!eric @ Ucb-Vax
Subject: Re: Alas, I must flame...
Article-I.D.: aplvax.358

        The "sensitivity" issue is not limited to government - most
companies also limit the distribution of information that they
consider "company private". I find very little wrong with the
idea of "we paid for it, we should benefit from it". The simple
truth is that they did underwrite the cost of the research. No one
is forced to work under these conditions, but if you want to take
the bucks, you have to realize that there are conditions attached
to them. On the whole, DoD has been amazingly open with the disclosure
of its CS research - one big example is ARPANET. True, they are now
wanting to split it up, but they are still leaving half of it to
research facilities who did not foot the bill for its development.
Perhaps it can be carried to extremes (I have never seen that happen,
but let's assume that it can happen); still, they contracted for the work
to be done, and it is theirs to do with as they wish.

--
                                        eric
                                        ...!seismo!umcp-cs!aplvax!eric

------------------------------

Date: 7 Oct 83 18:56:18-PDT (Fri)
From: npois!hogpc!houti!ariel!vax135!floyd!cmcl2!csd1!condict@Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: csd1.124

                     [Very long article.]


Self-awareness is an illusion?  I've heard this curious statement
before and never understood it.  YOUR self-awareness may be an
illusion that is fooling me, and you may think that MY self-awareness
is an illusion, but one thing that you cannot deny (the very, only
thing that you know for sure) is that you, yourself, in there looking
out at the world through your eyeballs, are aware of yourself doing
that.  At least you cannot deny it if it is true.  The point is, I
know that I have self-awareness -- by the very act of experiencing
it.  You cannot take this away from me by telling me that my
experience is an illusion.  That is a patently ludicrous statement,
sillier even than when your mother (no offense -- okay, my mother,
then) used to tell you that the pain was all in your head.  Of course
it is!  That is exactly what the problem is!

Let me try to say this another way, since I have never been able to
get this across to someone who doesn't already believe it.  There are
some statements that are true by definition, for instance, the
statement, "I pronounce you man and wife".  The pronouncement happens
by the very saying of it and cannot be denied by anyone who has heard
it, although the legitimacy of the marriage can be questioned, of
course.  The self-awareness thing is completely internal, so you may
sensibly question the statement "I have self-awareness" when it comes
from someone else.  What you cannot rationally say is "Gee, I wonder
if I really am aware of being in this body and looking down at my
hands with these two eyes and making my fingers wiggle at will?"  To
ask this question seriously of yourself is an indication that you
need immediate psychiatric help.  Go directly to Bellevue and commit
yourself.  It is as lunatic a question as asking yourself "Gee, am I
really feeling this pain or is it only an illusion that I hurt so bad
that I would happily throw myself in the trash masher to extinguish
it?"

For those of you who misunderstand what I mean by self-awareness,
here is the best I can do at an explanation.  There is an obvious
sense in which my body is not me.  You can cut off any piece of it
that leaves the rest functioning (alive and able to think) and the
piece that is cut off will not take part in any of my experiences,
while the rest of the body will still contain (be the center for?) my
self-awareness.  You may think that this is just because my brain is
in the big piece.  No, there is something more to it than that.  With
a little imagination you can picture an android being constructed
someday that has an AI brain that can be programmed with all the
memories you have now and all the same mental faculties.  Now picture
yourself observing the android and noting that it is an exact copy of
you.  You can then imagine actually BEING that android, seeing what
it sees, feeling what it feels.  What is the difference between
observing the android and being the android?  It is just this -- in
the latter case your self-awareness is centered in the android, while
in the former it is not.  That is what self-awareness, also called a
soul, is.  It is the one true meaning of the word "I", which does not
refer to any particular collection of atoms, but rather to the "you"
that is occupying the body.  This is not a religious issue either, so
back off, all you atheist and Christian fanatics.  I'm just calling
it a soul because it is the real "me", and I can imagine it residing
in various different bodies and machines, although I would, of
course, prefer some to others.

This, then, is the reason I would never step into one of those
teleporters that functions by ripping apart your atoms, then
reconstructing an exact copy at a distant site.  My self-awareness,
while it doesn't need a biological body to exist, needs something!
What guarantee do I have that "I", the "me" that sees and hears the
door of the transporter chamber clang shut, will actually be able to
find the new copy of my body when it is reconstructed three million
parsecs away?  Some of you are laughing at my lack of modernism here,
but I can have the last laugh if you're stupid enough to get into the
teleporter with me at the controls.  Suppose it functions like this
(from a real sci-fi story that I read): It scans your body, transmits
the copying information, then when it is certain that the copy got
through it zaps the old copy, to avoid the inconvenience of there
being two of you (a real mess at tax time!).  Now this doesn't bother
you a bit since it all happens in microseconds and your
self-awareness, being an illusion, is not to be consulted in the
matter.  But suppose I put your beliefs to the test by setting the
controls so that the copy is made but the original is not destroyed.
You get out of the teleporter at both ends, with the original you
thinking that something went wrong.  I greet you with:

"Hi there!  Don't worry, you got transported okay.  Here, you can
talk to your copy on the telephone to make sure.  The reason that I
didn't destroy this copy of you is because I thought you would enjoy
doing it yourself.  Not many people get to commit suicide and still
be around to talk about it at cocktail parties, eh?  Now, would you
like the hari-kari knife, the laser death ray, or the nice little red
pills?"

You, of course, would see no problem whatsoever with doing yourself
in on the spot, and would thank me for adding a little excitement to
your otherwise mundane trip.  Right?  What, you have a problem with
this scenario?  Oh, it doesn't bother you if only one copy of you
exists at a time, but if there are ever two, by some error, your
spouse is stuck with both of you?  What does the timing have to do
with your belief in self-awareness?  Relativity theory says that the
order of the two events is indeterminate anyway.

People who won't admit the reality of their own self-awareness have
always bothered me.  I'm not sure I want to go out for a beer with,
much less date or marry someone who doesn't at least claim to have
self-awareness (even if they're only faking).  I get this image of me
riding in a car with this non-self-aware person, when suddenly, as we
reach a curve with a huge semi coming in the other direction, they
fail to move the wheel to stay in the right lane, not seeing any
particular reason to attempt to extend their own unimportant
existence.  After all, if their awareness is just an illusion, the
implication is that they are really just a biological automaton and
it don't make no never mind what happens to it (or the one in the
next seat, for that matter, emitting the strange sounds and clutching
the dashboard).

The Big Unanswered Question then (which belongs in net.philosophy,
where I will expect to see the answer) is this:

                "Why do I have self-awareness?"

By this I do not mean, why does my body emit sounds that your body
interprets to be statements that my body is making about itself.  I
mean why am *I* here, and not just my body and brain?  You can't tell
me that I'm not, because I have a better vantage point than you do,
being me and not you.  I am the only one qualified to rule on the
issue, and I'll thank you to keep your opinion to yourself.  This
doesn't alter the fact that I find my existence (that is, the
existence of my awareness, not my physical support system), to be
rather arbitrary.  I feel that my body/brain combination could get
along just fine without it, and would not waste so much time reading
and writing windy news articles.

Enough of this, already, but I want to close by describing what
happened when I had this conversation with two good friends.  They
were refusing to agree to any of it, and I was starting to get a
little suspicious.  Only half in jest, I tried explaining things
this way.  I said:

"Look, I know I'm in here, I can see myself seeing and hear myself
hearing, but I'm willing to admit that maybe you two aren't really
self-aware.  Maybe, in fact, you're robots, everybody is robots
except me.  There really is no Cornell University, or U.S.A. for that
matter.  It's all an elaborate production by some insidious showman
who constructs fake buildings and offices wherever I go and rips them
down behind me when I leave."

Whereupon a strange, unreadable look came over Dean's face, and he
called to someone I couldn't see, "Okay, jig's up! Cut! He figured it
out." (Hands motioning now) "Get those props out of here, tear down
those building fronts, ... "

Scared the pants off me.

Michael Condict   ...!cmcl2!csd1!condict
New York U.

------------------------------

End of AIList Digest
********************

∂12-Oct-83  1827	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #75
Received: from SRI-AI by SU-AI with TCP/SMTP; 12 Oct 83  18:26:51 PDT
Date: Wednesday, October 12, 1983 10:41AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #75
To: AIList@SRI-AI


AIList Digest           Thursday, 13 Oct 1983      Volume 1 : Issue 75

Today's Topics:
  Music & AI - Poll Results,
  Alert - September CACM,
  Fuzzy Logic - Zadeh Syllogism,
  Administrivia - Usenet Submissions & Seminar Notices,
  Seminars - HP 10/13/83 & Rutgers Colloquium
----------------------------------------------------------------------

Date: 11 Oct 83 16:16:12 EDT  (Tue)
From: Randy Trigg <randy%umcp-cs@UDel-Relay>
Subject: music poll results

Here are the results of my request for info on AI and music.
(I apologize for losing the header to the first mail below.)

                        - Randy
                   ______________________________

Music in AI - find Art Wink formerly of U. of Pgh. Dept of info sci.
He had a real nice program to imitate Debussy (experts could not tell
its compositions from originals).

                   ------------------------------

Date:     22 Sep 83 01:55-EST (Thu)
From:     Michael Aramini <aramini@umass-cs>
Subject:  RE: AI and music

At the AAAI conference, I was talking to someone from Atari (from Atari
Cambridge Labs, I think) who was doing work with AI and music.  I can't
remember his name, however.  He was working (with others) on automating
transforming music of one genre into another.  This involved trying to
quasi-formally define what the characteristics of each genre of music are.
It sounded like they were doing a lot of work on defining ragtime and
converting ragtime to other genres.  He said there were other people at Atari
that are working on modeling the emotional state various characteristics of
music evoke in the listener.

I am sorry that I don't have more info as to the names of these people or how
to get in touch with them.  All that I know is that this work is being done
at Atari Labs either in Cambridge, MA or Palo Alto, CA.

                   ------------------------------

Date: Thu 22 Sep 83 11:04:22-EDT
From: Ted Markowitz <TJM@COLUMBIA-20>
Subject: Music and AI
Cc: TJM@COLUMBIA-20

Having an undergrad degree in music and working toward a graduate
degree in CS, I'm very interested in any results you get from your
posting. I've been toying with the idea of working on a music-AI
interface, but haven't pinned down anything specific yet. What
is your research concerned with?

--ted
                   ------------------------------

Date: 24 Sep 1983 20:27:57-PDT
From: Andy Cromarty <andy@aids-unix>
Subject: Music analysis/generation & AI

  There are 3 places that immediately come to mind:

1. There is a huge and well-developed (indeed, venerable) computer
music group at Stanford.  They currently occupy what used to be
the old AI Lab.  I'm sure someone else will mention them, but if
not, call Stanford (or send me another note and I'll find a net
address you can send mail to for details).

2. Atari Research is doing a lot of this sort of work -- generation,
analysis, etc., both in Cambridge (Mass) and Sunnyvale (Calif.), I
believe.

3. Some very good work has come out of MIT in the past few years.
David Levitt is working on his PhD in this area there, having completed
his master's in AI approaches to jazz improvisation, if my memory serves,
and I think William Paseman also wrote his master's on a related topic
there.  Send mail to LEVITT@MIT-MC for info -- I'm sure he'd be happy
to tell you more about his work.
                                                asc

------------------------------

Date: Wed 12 Oct 83 09:40:48-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Alert - September CACM

The September CACM contains the following interesting items:

A clever cover graphically illustrating the U.S. and Japanese
approaches to the Fifth Generation.

A Harper and Row ad (without prices) including Touretzky's
LISP: A Gentle Introduction to Symbolic Computation and
Eisenstadt and O'Shea's Artificial Intelligence: Tools,
Techniques and Applications.  [AIList would welcome reviews.]

An editorial by Peter J. Denning on the manifest destiny of
AI to succeed because the concept is easily grasped, credible,
expected to succeed, and seen as an improvement.

An introduction and three articles about the Fifth Generation,
Japanese management, the Japanese effort, and MCC.

A report on BELLE's slim victory in the 13th N.A. Computer Chess
Championship.

A note on the sublanguages (i.e., natural restricted languages)
conference at NYU next January.

A note on DOD's wholesale adoption of ADA.

                                        -- Ken Laws

------------------------------

Date: Wed 12 Oct 83 09:24:34-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Zadeh Syllogism

Lotfi Zadeh used a syllogism yesterday that was new to me.  To
paraphrase slightly:


    Cheap apartments are rare and highly sought.
    Rare and highly sought objects are expensive.
    ---------------------------------------------
    Cheap apartments are expensive.


I suppose any reasonable system will conclude that cheap apartments
cannot exist, which may in fact be the case.

                                        -- Ken Laws

------------------------------

Date: Wed 12 Oct 83 10:20:57-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Usenet Submissions

It has come to my attention that I may be failing to distribute
some Usenet-originated submissions back to Usenet readers.  If
this is true, I apologize.  I have not been simply ignoring
submissions; if you haven't heard from me, the item was distributed
to the Arpanet.

The problem involves the Article-I.D. field in Usenet-
originated messages.  The gateway software (maintained by
Knutsen@SRI-UNIX) ignores digest items containing this keyword
so that messages originating from net.ai will not be posted
back to net.ai.

Unfortunately, messages sent directly to AIList instead of to
net.ai also contain this keyword.  I have not been stripping it
out, and so the submissions have not been making it back to Usenet.
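The stripping described above can be sketched in a few lines of modern Python. This is a hypothetical illustration only: everything except the "Article-I.D." header name is invented, and the real gateway software is not shown here.

```python
# Hypothetical sketch of the fix described above: drop any
# "Article-I.D." line from a message's headers so the gateway
# does not mistake a direct submission for a net.ai repost.
# Only the header name comes from the text; the rest is invented.
def strip_article_id(message_lines):
    """Return the message with any Article-I.D. header removed."""
    result = []
    in_headers = True
    for line in message_lines:
        if in_headers and line == "":
            in_headers = False          # blank line ends the headers
        if in_headers and line.lower().startswith("article-i.d.:"):
            continue                    # strip the gateway keyword
        result.append(line)
    return result

msg = ["From: someone@somewhere",
       "Article-I.D.: sri.1234",
       "Subject: direct submission",
       "",
       "Body text is left untouched."]
print(strip_article_id(msg))
```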

I will try to be more careful in the future.  Direct AIList
contributors who want to be sure I don't slip should begin
their submissions with a "strip ID field" comment.  Even a
"Dear Moderator," might trigger my editing instincts.  I hope
to handle direct submissions correctly even without prompting,
but the visible distinction between the two message types is
slight.

                                        -- Ken Laws

------------------------------

Date: Wed 12 Oct 83 10:04:03-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Seminar Notices

There have been a couple of net.ai requests lately that seminar
notices be dropped, plus a strong request that they be
continued.  I would like to make a clear policy statement
on this matter.  Anyone who wishes to discuss it further
may write to AIList-Request@SRI-AI; I will attempt to
compile opinions or moderate the discussion in a reasonable
manner.

Strictly speaking, AIList seldom prints "seminar notices".
Rather, it prints abstracts of AI-related talks.  The abstract
is the primary item; the fact that the speaker is graduating
or out "selling" is secondary; and the possibility that AIList
readers might attend is tertiary.  I try to distribute the
notices in a timely fashion, but responses to my original
query were two-to-one in favor of the abstracts even when the
talk had already been given.

The abstracts have been heavily weighted in favor of the
Bay Area; some readers have taken this to be provincialism.
Instead, it is simply the case that Stanford, Hewlett-Packard,
and occasionally SRI are the only sources available to me
that provide abstracts.  Other sources would be welcome.

In the event that too many abstracts become available, I will
institute rigorous screening criteria.  I do not feel the need
to do so at this time.  I have passed up database, math, and CS
abstracts because they are outside the general AI and data
analysis domain of AIList; others might disagree.  I have
included some borderline seminars because they were the first
of a series; I felt that the series itself was worth publicizing.

I can't please all of the people all of the time, but your feedback
is welcome to help me keep on course.  At present, I regard the
abstracts to be one of AIList's strengths.

                                        -- Ken Laws

------------------------------

Date: 11 Oct 83 16:30:27 PDT (Tuesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 10/13/83


                Piero P. Bonissone

                Corporate Research and Development
                General Electric Corporation

        DELTA: An Expert System for Troubleshooting
                Diesel Electric Locomotives


The a priori information available to the repair crew is a list of
"symptoms" reported by the engine crew.  More information can be
gathered in the "running repair" shop by taking measurements and
performing tests, provided that the two-hour time limit is not exceeded.

A rule based expert system, DELTA (Diesel Electric Locomotive
Troubleshooting Aid) has been developed at the General Electric
Corporate Research and Development Laboratories to guide in the repair
of partially disabled electric locomotives.  The system enforces a
disciplined troubleshooting procedure which minimizes the cost and time
of the corrective maintenance, allowing detection and repair of
malfunctions in the two-hour window allotted to the service personnel in
charge of those tasks.

A prototype system has been implemented in FORTH, running on a Digital
Equipment VAX 11/780 under VMS, on a PDP 11/70 under RSX-11M, and on a
PDP 11/23 under RSX-11M.  This system contains approximately 550 rules,
partially representing the knowledge of a Senior Field Service Engineer.
The system is provided with graphical/video capabilities which can help
the user in locating and identifying locomotive components, as well as
illustrating repair procedures.

Although the system only contains a limited number of rules (550), it
covers, in a shallow manner, a wide breadth of the problem space.  The
number of rules will soon be raised to approximately 1200 to cover, with
increased depth, a larger portion of the problem space.
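The flavor of such a rule base can be suggested with a toy forward-chaining fragment. The rules and symptom names below are invented for illustration and are not DELTA's (DELTA itself was implemented in FORTH, not Python).

```python
# Toy forward-chaining fragment in the style the abstract describes.
# Every rule pairs a set of symptom conditions with a repair action;
# all names here are invented, not taken from DELTA.
RULES = [
    ({"engine_cranks": False, "battery_low": True}, "charge battery"),
    ({"engine_cranks": True, "no_power": True}, "check traction motor"),
]

def diagnose(symptoms):
    """Return repair actions whose conditions all match the symptoms."""
    return [action
            for conditions, action in RULES
            if all(symptoms.get(k) == v for k, v in conditions.items())]

print(diagnose({"engine_cranks": True, "no_power": True}))
```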

        Thursday, October 13, 1983  4:00 PM

        Hewlett Packard
        Stanford Division Labs
        5M Conference room
        1501 Page Mill Rd
        Palo Alto, CA  94304

        ** Be sure to arrive at the building's lobby ON TIME, so that you may
be escorted to the meeting room.

------------------------------

Date: 11 Oct 83 13:47:44 EDT
From: LOUNGO@RUTGERS.ARPA
Subject: colloquium

              [Reprinted from the RUTGERS bboard.  Long message.]



                  Computer Science Faculty Research Colloquia

                       Date: Thursday, October 13, 1983

                                Time: 2:00-4:15

                  Place: Room 705, Hill Center, Busch Campus

Schedule:

2:00-2:15       Prof. Saul Amarel, Chairman, Department of Computer Science
                Introductory Remarks

2:15-2:30       Prof. Casimir Kulikowski
                Title:   Expert Systems and their Applications
                Area(s): Artificial intelligence


2:30-2:45       Prof. Natesa Sridharan
                Title:   TAXMAN
                Area(s): Artificial intelligence (knowledge representation),
                         legal reasoning

2:45-3:00       Prof. Natesa Sridharan
                Title:   Artificial Intelligence and Parallelism
                Area(s): Artificial intelligence, parallelism

3:00-3:15       Prof. Saul Amarel
                Title:   Problem Reformulations and Expertise Acquisition;
                         Theory Formation
                Area(s): Artificial intelligence

3:15-3:30       Prof. Michael Grigoriadis
                Title:   Large Scale Mathematical Programming;
                         Network Optimization; Design of Computer Networks
                Area(s): Computer networks

3:30-3:45       Prof. Robert Vichnevetsky
                Title:   Numerical Solutions of Hyperbolic Equations
                Area(s): Numerical analysis

3:45-4:00       Prof. Martin Dowd
                Title:   P~=NP
                Area(s): Computational complexity

4:00-4:15       Prof. Ann Yasuhara
                Title:   Notions of Complexity for Trees, DAGs,
                         and subsets of {0,1}*
                Area(s): Computational complexity


                COFFEE AND DONUTS AT 1:30

-------
Mail-From: LAWS created at 12-Oct-83 09:11:56
Mail-From: LOUNGO created at 11-Oct-83 13:48:35
Date: 11 Oct 83 13:48:35 EDT
From: LOUNGO@RUTGERS.ARPA
Subject: colloquium
To: BBOARD@RUTGERS.ARPA
cc: pettY@RUTGERS.ARPA, lounGO@RUTGERS.ARPA
ReSent-date: Wed 12 Oct 83 09:11:56-PDT
ReSent-from: Ken Laws <Laws@SRI-AI.ARPA>
ReSent-to: ailist@SRI-AI.ARPA


                  Computer Science Faculty Research Colloquia

                        Date: Friday, October 14, 1983

                                Time: 2:00-4:15

                  Place: Room 705, Hill Center, Busch Campus

Schedule:

2:00-2:15       Prof. Tom Mitchell
                Title:   Machine Learning and Artificial Intelligence
                Area(s): Artificial intelligence

2:15-2:30       Prof. Louis Steinberg
                Title:   An Artificial Intelligence Approach to Computer-Aided
                         Design for VLSI
                Area(s): Artificial intelligence, computer-aided design, VLSI

2:30-2:45       Prof. Donald Smith
                Title:   Debugging VLSI Designs
                Area(s): Artificial intelligence, computer-aided design, VLSI

2:45-3:00       Prof. Apostolos Gerasoulis
                Title:   Numerical Solutions to Integral Equations
                Area(s): Numerical analysis

3:00-3:15       Prof. Alexander Borgida
                Title:   Applications of AI to Information Systems Development
                Area(s): Artificial intelligence, databases,
                         software engineering

3:15-3:30       Prof. Naftaly Minsky
                Title:   Programming Environments for Evolving Systems
                Area(s): Software engineering, databases, artificial
                         intelligence

3:30-3:45       Prof. William Steiger
                Title:   Random Algorithms
                Area(s): Analysis of algorithms, numerical methods,
                         non-numerical methods

3:45-4:00

4:00-4:15


                  Computer Science Faculty Research Colloquia

                       Date: Thursday, October 20, 1983

                                Time: 2:00-4:15

                  Place: Room 705, Hill Center, Busch Campus

Schedule:

2:00-2:15       Prof. Thomaz Imielinski
                Title:   Relational Databases and AI; Logic Programming
                Area(s): Databases, artificial intelligence

2:15-2:30       Prof. David Rozenshtein
                Title:   Nice Relational Databases
                Area(s): Databases, data models

2:30-2:45       Prof. Chitoor Srinivasan
                Title:   Expert Systems that Reason About Action with Time
                Area(s): Artificial intelligence, knowledge-based systems

2:45-3:00       Prof. Gerald Richter
                Title:   Numerical Solutions to Partial Differential Equations
                Area(s): Numerical analysis

3:00-3:15       Prof. Irving Rabinowitz
                Title:   - To be announced -
                Area(s): Programming languages

3:15-3:30       Prof. Saul Levy
                Title:   Distributed Computing
                Area(s): Computing, computer architecture

3:30-3:45       Prof. Yehoshua Perl
                Title:   Sorting Networks, Probabilistic Parallel Algorithms,
                         String Matching
                Area(s): Design and analysis of algorithms

3:45-4:00       Prof. Marvin Paull
                Title:   Algorithm Design
                Area(s): Design and analysis of algorithms

4:00-4:15       Prof. Barbara Ryder
                Title:   Incremental Data Flow Analysis
                Area(s): Design and analysis of algorithms,
                         compiler optimization

                COFFEE AND DONUTS AT 1:30

------------------------------

End of AIList Digest
********************

∂13-Oct-83  1804	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #76
Received: from SRI-AI by SU-AI with TCP/SMTP; 13 Oct 83  18:04:03 PDT
Date: Thursday, October 13, 1983 10:13AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #76
To: AIList@SRI-AI


AIList Digest           Thursday, 13 Oct 1983      Volume 1 : Issue 76

Today's Topics:
  Intelligent Front Ends - Request,
  Finance - IntelliGenetics,
  Fuzzy Logic - Zadeh's Paradox,
  Publication - Government Reviews
----------------------------------------------------------------------

Date: Thursday, 13-Oct-83  12:04:24-BST
From: BUNDY HPS (on ERCC DEC-10) <bundy@edxa>
Reply-to: bundy@rutgers.arpa
Subject: Request for Information on Intelligent Front Ends


        The UK government has set up the Alvey Programme as the UK
answer to the Japanese 5th Generation Programme.  One part of that
Programme has been to identify and promote research in a number of
'themes'.  I am the manager of one such theme - on 'Intelligent Front
Ends' (IFE).  An IFE is defined as follows:

"A front end to an existing software package, for example a finite
element package or a mathematical modelling system, which provides a
user-friendly interface (a "human window") to packages which without
it, are too complex and/or technically incomprehensible to be
accessible to many potential users.  An intelligent front end builds a
model of the user's problem through user-oriented dialogue mechanisms
based on menus or quasi-natural language, which is then used to
generate suitably coded instructions for the package."
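As a minimal caricature of that loop (entirely invented; no actual package or its command syntax is implied), a dialogue model and command generator might look like:

```python
# Caricature of the IFE loop defined above: a canned dialogue builds
# a model of the user's problem, which is then translated into coded
# instructions for the package.  The question keys and the "SET ..."
# command syntax are wholly invented for illustration.
QUESTIONS = [
    ("mesh",   "Mesh density (coarse/fine)?"),
    ("solver", "Solver (direct/iterative)?"),
]

def build_model(answers):
    """Assemble a problem model from dialogue answers."""
    return {key: answers[key] for key, _prompt in QUESTIONS}

def generate_commands(model):
    """Turn the model into instructions for the underlying package."""
    return ["SET %s %s" % (key.upper(), value) for key, value in model.items()]

model = build_model({"mesh": "fine", "solver": "direct"})
print(generate_commands(model))
```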

        One of the theme activities is to gather information about
IFEs, for instance:  useful references and short descriptions of
available tools.  If you can supply such information then please send it
to BUNDY@RUTGERS.  Thanks in advance.

                Alan Bundy

------------------------------

Date: 12 Oct 83  0313 PDT
From: Arthur Keller <ARK@SU-AI>
Subject: IntelliGenetics

                [Reprinted from the SU-SCORE bboard.]


From Tuesday's SF Chronicle (page 56):

"IntelliGenetics Inc., Palo Alto, has filed with the Securities and
Exchange Commission to sell 1.6 million common shares in late November.

The issue, co-managed by Ladenburg, Thalmann & Co. Inc. of New York
and Freehling & Co. of Chicago, will be priced between $6 and $7 a share.

IntelliGenetics provides artificial intelligence based software for use
in genetic engineering and other fields."

------------------------------

Date: Thursday, 13-Oct-83  16:00:01-BST
From: RICHARD HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Zadeh's apartment paradox


The resolution of the paradox lies in realising that
        "cheap apartments are expensive"
is not contradictory.  "cheap" refers to the cost of
maintaining (rent, bus fares, repairs) the apartment
and "expensive" refers to the cost of procuring it.
The fully stated theorem is
        \/x apartment(x) & low(upkeep(x)) =>
            difficult_to_procure(x)
        \/x difficult_to_procure(x) =>
            high(cost_of_procuring(x))
hence   \/x apartment(x) & low(upkeep(x)) =>
            high(cost_of_procuring(x))
where "low" and "high" can be as fuzzy as you please.

A reasoning system should not conclude that cheap
flats don't exist, but rather that the axioms it has
been given are inconsistent with the assumption that
they do.  Sooner or later you are going to tell it
"Jones has a cheap flat", and then it will spot the
flawed axioms.
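The two-axiom chain can also be sketched numerically with fuzzy truth degrees in [0,1]. The membership function for "low" and the use of min() for conjunction are one simple choice among many; all constants below are made up.

```python
# Numeric sketch of the two-axiom chain above.  Truth degrees live in
# [0, 1]; min() stands in for conjunction, and the implications pass
# the degree through unchanged -- an illustrative choice, not Zadeh's
# full calculus.  The "low" membership function is invented.
def low(upkeep):
    """Degree to which a monthly upkeep figure counts as 'low'."""
    return max(0.0, min(1.0, (800.0 - upkeep) / 800.0))

def difficult_to_procure(is_apartment, upkeep):
    # \/x apartment(x) & low(upkeep(x)) => difficult_to_procure(x)
    return min(is_apartment, low(upkeep))

def high_cost_of_procuring(difficult):
    # \/x difficult_to_procure(x) => high(cost_of_procuring(x))
    return difficult

# A flat with low upkeep: cheap to keep, yet costly to procure --
# two different predicates, hence no contradiction.
print(high_cost_of_procuring(difficult_to_procure(1.0, 200.0)))
```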


[I can see your point that one might pay a high price
to procure an apartment with a low rental.  There is
an alternate interpretation which I had in mind, however.
The paradox could have been stated in terms of any
bargain, specifically one in which upkeep is not a
factor.  One could conclude, for instance, that a cheap
meal is expensive.  My own resolution is that the term
"rare" (or "rare and highly sought") must be split into
subconcepts corresponding to the cause of rarity.  When
discussing economics, one must always reason separately
about economic rarities such as rare bargains.  The second
assertion in the syllogism then becomes "rare and highly
sought objects other than rare bargains are (Zadeh might
add 'usually') expensive", or "rare and highly sought
objects are either expensive or are bargains".

                                        -- Ken Laws ]

------------------------------

Date: Thu 13 Oct 83 03:38:21-CDT
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: Re: Zadeh Syllogism

        Expensive apartments are not highly sought.
        Items not in demand are cheap.
                -> expensive apartments are cheap.

or      The higher the price, the lower the demand.
        The lower the demand, the lower the price.
                -> the higher the price , the lower the price.

ergo ??         garbage in , garbage out!

Why am I thinking of Reaganomics right now ????

Werner (UUCP:   { ut-sally , ut-ngp }   !utastro!werner
        ARPA:   werner@utexas-20)

PS:     at this time of the day, one gets the urge to voice "weird" stuff ...
                               -------

[The first form is as persuasive as the original syllogism.
The second seems to be no more than a statement of negative
feedback.  Whether the system is stable depends on the nature
of the implied driving forces.  It seems we are now dealing
with a temporal logic.

An example of an unstable system is:

    The fewer items sold, the higher the unit price must be.
    The higher the price, the fewer the items sold.
    --------------------------------------------------------
    Bankruptcy.

-- KIL]
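The unstable loop can be simulated in a few lines (all constants below are invented): with a steep enough demand curve, the iteration collapses to the sales floor.

```python
# Toy iteration of the unstable feedback loop above.  The unit price
# must spread fixed costs over sales, and demand falls steeply as the
# price rises, so the spiral drives sales to the floor of one unit.
FIXED_COST = 10000.0

def price_for(units):           # the fewer items sold, the higher the price
    return FIXED_COST / units + 5.0

def demand_for(price):          # the higher the price, the fewer items sold
    return max(1.0, 2000.0 - 120.0 * price)

units = 1000.0
for step in range(6):
    price = price_for(units)
    units = demand_for(price)
    print(step, round(price, 2), round(units, 1))
# sales collapse toward the floor of 1 unit: "bankruptcy"
```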

------------------------------

Date: Wed, 12 Oct 83 13:16 PDT
From: GMEREDITH.ES@PARC-MAXC.ARPA
Subject: Sensitivity Issue and Self-Awareness


I can understand the concern of researchers about censorship.

However, having worked with an agency which spent time extracting
information of a classified nature from unclassified or semi-secure
sources, I have to say that people not trained in such pursuits are
usually very poor judges of the difference between necessary efforts to
curb flow of classified information and "censorship".

I can also guarantee that this country's government is not alone in
knowing how to misuse the results of research carried out with the most
noble of intents.



Next, to the subject of self-awareness.  The tendency of an individual
to see his/her corporal self as distinct from the *I* experience or to
see others as robots or a kind of illusion is sufficient to win a tag of
'schizophrenic' from any psychiatrist and various other negative
reactions from those involved in other schools of the psychological
community.

Beyond that, the above tendencies make relating to 'real' world
phenomena very difficult.   That semi coming around the curve will
continue to follow through on the illusion of having smashed those just
recently discontinued illusions in the on-coming car.

Guy

------------------------------

Date: Wed 12 Oct 83 00:07:15-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Goverment Reviews of Basic Research

    I must disagree with Frank Adrian who commented in a previous digest
that "I urge everyone to boycott this conference" and other conferences with
this requirement. The progress of science should not be halted due to some
government ruling, especially since an attempted boycott would have little
positive and (probably) much negative effect. Assuming that all of the
'upstanding' scientists participated, is there any reason to think that
the government couldn't find less discerning researchers more than happy to
accept grant money?

    Eric (sorry, no last name) is preoccupied with the fact that government
'paid' for the research; aren't "we" the people the real owners, in that case?
Or can there be real owners of basic knowledge?  As I recall, the patent office
has ruled that algorithms are unpatentable and thus inherently public domain.
The control of ideas has been an elusive goal for many governments, but even so,
it is rare for a government to try to claim ownership of an idea as a
justification for restriction; outside of the military domain, this seems
to be a new one...

        As a scientist, I believe that the world and humanity will gain wisdom
and insight through research, eventually enabling us to end war, hunger,
ignorance, whatever. Other forces in the world have different, more short-term
goals, for our work; this is fine, as long as the long-term reasons for
scientific research are not sacrificed. Sure, they 'paid' for the results of
our short-term goals, but we should never allow that to blind us to the real
reason for working in AI, and *NO-ONE* can own that.

   So I'll take government money (if they offer me any after this diatribe!)
and work on various systems and schemes, but I'll fight any attempt to
nullify the long term goals I'm really working for. I feel these new
restrictions are detrimental to the long-term goals of scientific research,
but currently, I'm going with things here... we're the best in the world (sigh)
and I plan on fighting to keep it that way.

David Rogers
DRogers@SUMEX-AIM.ARPA

------------------------------

Date: Wed, 12 Oct 83 10:26:28 EDT
From: Morton A. Hirschberg <mort@brl-bmd>
Subject: Flaming Mad

  I have refrained  from  reflaming  since  I  sent  the  initial
conference  announcement  on  "Intelligent Systems and Machines."
First,  the  conference  is  not  being  sponsored  by   the   US
Government.   Second,  many  papers  may  be  submitted  by those
affected by the security  release  and  it  seemed  necessary  to
include  this as part of the announcement.  Third, I attended the
conference at Oakland earlier  this  year  and  it  was  a  super
conference.  Fourth, you may cut off your nose to spite your face if
you as an individual do not want to submit a paper or attend  but
you are not doing much service to those sponsoring the conference
who are true scientists by urging boycotts.  Finally, below is  a
little of my own philosophy.

  I have rarely  seen  science  or  the  application  of  science
(engineering)  benefit anyone anywhere without an associated cost
(often called an investment).  The costs are usually borne by the
investors  and  if  the  end  product is a success then costs are
passed  on  to  consumers.   I  can  find  few   examples   where
discoveries  in  science  or  in  the  name  of  science have not
benefited the discoverer and/or  his  heirs,  or  the  investors.
Many  of  our  early discoveries were made by men of considerable
wealth who could dally with theory and experimentation  (and  the
arts)  and science using their own resources.  We may have gained
a heritage but they gained a profit.

  What seems to constitute a common heritage is either  something
that  has been around for so long that it is either in the public
domain or is a  romanticized  fiction  (e.g.  Paul  Muni  playing
Pasteur).   Simultaneous  discovery has been responsible for many
theories being in  the  public  domain  as  well  as  leading  to
products  which were hotly contested in lawsuits.  (e.g. did Bell
really invent the telephone or Edison the movie camera?)

  Watson in his book "The Double Helix" gives a clear picture  of
what  a typical scientist may really be and it is not Arrowsmith.
I did not see Watson refuse his Nobel because the radiologist did
not get a prize.

  Government, and here for historical reasons we must also include
state  and  church, has  always had a role in the sciences.  That
role is one that governments can not always be proud of (Galileo,
Rachel Carson, Sakharov).

  The manner in  which  the  United  States  Government  conducts
business  gives  great  latitude  to scientists and to investors.
When the US Government buys something it should be theirs just as
when  you as an individual buy something.  As such it is then the
purview of the US Government as to what to do with  the  product.
Note  the  US  Government  often  buys  with  limited  rights  of
ownership and distribution.

  It has been my observation having worked in  private  industry,
for a university, and now for the government that relations among
the three have not been optimal and in many cases not mutually
rewarding.   This  is  a  great  concern  of  mine and many of my
colleagues.  I would like a role in changing relations among  the
three  and do work toward that as a personal goal.  This includes
not  referring  to  academicians  as  eggheads   or   charlatans;
industrialists  as grubby profiteers; and government employees as
empty-headed bureaucrats.

  I recommend that young flamers try to maintain a little naivete
as they mature but not so much that they are ignorant of reality.

  Every institution has its structure, and by and large one works
within the structure to earn a living, or is free to move on, or
can work to change that structure.  One possible change is for
the US Government to conduct business the way the Japanese do
(at least in certain cases).  Maybe AI is the place to start.

  I also notice that mail on the net comes  across  much  harsher
than  it  is  intended  to  be.  This can be overcome by being as
polite as possible and being more verbose.  In addition, one  can
read their mail more than once before flaming.

                                Mort

------------------------------

End of AIList Digest
********************

∂14-Oct-83  1545	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #77
Received: from SRI-AI by SU-AI with TCP/SMTP; 14 Oct 83  15:44:18 PDT
Date: Friday, October 14, 1983 9:36AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #77
To: AIList@SRI-AI


AIList Digest            Friday, 14 Oct 1983       Volume 1 : Issue 77

Today's Topics:
  Natural Language - Semantic Chart Parsing & Macaroni & Grammars,
  Games - Rog-O-Matic,
  Seminar - Nau at UMaryland, Diagnostic Problem Solving
----------------------------------------------------------------------

Date: Wednesday, 12 October 1983 14:01:50 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: "Semantic chart parsing"

        I should have made it clear in my previous note on the subject that
the phrase "semantic chart parsing" is a name I've coined to describe a
parser which uses the technique of syntactic chart parsing, but includes
semantic information right from the start.  In a way, it's an attempt to
reconcile Schank-style immediate semantic interpretation with syntactically
oriented parsing, since both sources of information seem worthwhile.

------------------------------

Date: Wednesday, 12-Oct-83  17:52:33-BST
From: RICHARD HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Natural Language


There was rather more inflammation than information in the
exchanges between Dr Pereira and Whats-His-Name-Who-Butchers-
Leprechauns.  Possibly it's because I've only read one or two
[well, to be perfectly honest, three] papers on PHRAN and the
others in that PHamily, but I still can't see why it is that
their data structures aren't a grammar.  Admittedly they don't
look much like rules in an XG, but then rules in an XG don't
look much like an ATN either, and no-one has qualms about
calling ATNs grammars.  Can someone please explain in words
suitable for a 16-year-old child what makes phrasal analysis
so different from
        XGs (Extraposition grammars; include DCGs in this)
        ATNs
        Marcus-style parsers
        template-matching
that it is hailed as "solving" the parsing problem?
I have written grammars for tiny fragments of English in DCG,
ATN, and PIDGIN styles [the adverbs get me every time].  I am not
a linguist, and the coverage of these grammars was ludicrously
small.  So my claim that I found it vastly easier to extend and
debug the DCG version [DCGs are very like EAGs] will probably be
dismissed with the contempt it deserves.  Dr Pereira has published
his parser, and in other papers has published an XG interpreter.
I believe a micro-PHRAN has been published, and I would be grateful
for a pointer to it.  Has anyone published a phrasal-analysis
grimoire (if the term "grammar" doesn't suit) with say >100 "things"
(I forget the right name for the data structures), and how can I
get a copy?
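
For readers who have not met DCGs, the flavour of the formalism can be
sketched in Python rather than Prolog (this tiny fragment is invented
for illustration; real DCGs also thread arguments for agreement,
gaps, and semantics):

```python
# Toy sketch of the DCG idea: each nonterminal is a function that
# consumes a prefix of the word list and returns the possible
# remainders, i.e. Prolog's difference-list encoding of np(S0, S).

def word(w):                         # terminal: np --> [the]
    def parse(s):
        return [s[1:]] if s[:1] == [w] else []
    return parse

def seq(*parsers):                   # rule body: A, B, C
    def parse(s):
        rests = [s]
        for p in parsers:
            rests = [r2 for r in rests for r2 in p(r)]
        return rests
    return parse

def alt(*parsers):                   # alternative rules for one nonterminal
    def parse(s):
        return [r for p in parsers for r in p(s)]
    return parse

# s --> np, vp.   np --> [the], noun.   noun --> [cat] ; [dog].
# vp --> [sleeps].
noun = alt(word("cat"), word("dog"))
np   = seq(word("the"), noun)
vp   = word("sleeps")
s    = seq(np, vp)

def accepts(sentence):
    # success iff some derivation consumes the whole word list
    return [] in s(sentence.split())

print(accepts("the cat sleeps"))   # True
print(accepts("cat the sleeps"))   # False
```

Extending such a grammar means adding a rule or an alternative, which
is part of why debugging a DCG feels local in the way described above.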

     People certainly can accept ill-formed sentences.  But they DO
have quite definite notions of what is a well-formed sentence and
what is not.  I was recently in a London Underground station, and
saw a Telecom poster.  It was perfectly obvious that it was written
by an Englishman trying to write in American.  It finally dawned on
me that he was using American vocabulary and English syntax.  At
first sight the poster read easily enough, and the meaning came through.
But it was sufficiently strange to retain my attention until I saw what
was odd about it.  Our judgements of grammaticality are as sensitive as
that.  [I repeat, I am no linguist.  I once came away from a talk by
Gazdar saying to one of my fellow students, who was writing a parser:
"This extraposition, I don't believe people do that."]  I suggest that
people DO learn grammars, and what is more, they learn them in a form
that is not wholly unlike [note the caution] DCGs or ATNs.  We know that
DCGs are learnable, given positive and negative instances.  [Oh yes,
before someone jumps up and down and says that children don't get
negative instances, that is utter rubbish.  When a child says something
and is corrected by an adult, is that not a negative instance?  Of course
it is!]  However, when people APPLY grammars for parsing, I suggest that
they use repair methods to match what they hear against what they
expect.  [This is probably frames again.]  These repair methods range
all the way from subconscious signal cleaning [coping with say a lisp]
to fully conscious attempts to handle "Colourless green ideas sleep
furiously".  [Maybe parentheses like this are handled by a repair
mechanism?]  If this is granted, some of the complexity required to
handle say ellipsis would move out of the grammar and into the repair
mechanisms.  But if there is anything we know about human psychology,
it is that people DO have repair mechanisms.  There is a lot of work
on how children learn mathematics [not just Brown & co], and it turns
out that children will go to extraordinary lengths to patch a buggy
hack rather than admit they don't know.  So the fact that people can
cope with ungrammatical sentences is not evidence against grammars.

     As evidence FOR grammars, I would like to offer Macaroni.  Not
the comestible, the verse form.  Strictly speaking, Macaroni is a
mixture of the vernacular and Latin, but since it is no longer
popular we can allow any mixture of languages.  The odd thing about
Macaroni is that people can judge it grammatical or ungrammatical,
and what is more, can agree about their judgements as well as they
can agree about the vernacular or Latin taken separately.  My Latin
is so rusty there is no iron left, so here is something else.

        [Prolog is] [ho protos logos] [en programmation logiciel]
        English     Greek               French

This of course is (NP copula NP) PP, which is admissible in all
three languages, and the individual chunks are well-formed in their
several languages.  The main thing about Macaroni is that when
two languages have a very similar syntactic class, such as NP,
a sentence which starts off in one language may rewrite that
category in the other language, and someone who speaks both languages
will judge it acceptable.  Other ways of dividing up the sentence are
not judged acceptable, e.g.

        Prolog estin ho protos mot en logic programmation

is just silly.  S is very similar in most languages, which would account
for the acceptability of complete sentences in another language.  N is
pretty similar too, and we feel no real difficulty with single isolated
words from other languages like "chutzpa" or "pyjama" or "mana".  When
the syntactic classes are not such a good match, we feel rather more
uneasy about the mixture.  For example, "[ka ora] [teenei tangata]"
and "[these men] [are well]" both say much the same thing, but because
the Maaori nominal phrase and the English noun phrase aren't all that
similar, "[teenei tangata] [are well]" seems strained.

     The fact that bilingual people have little or no difficulty with
Macaroni is just as much a fact as the fact the people in general have
little difficulty with mildly malformed sentences.  Maybe they're the
same fact.  But I think the former deserves as much attention as the
latter.
     Does anyone have a parser with a grammar for English and a grammar
for [UK -> French or German; Canada -> French; USA -> Spanish] which use
the same categories as far as possible?  Have a go at putting the two
together, and try it on some Macaroni.  I suspect that if you have some
genuinely bilingual speakers to assist you, you will find it easier to
develop/correct the grammars together than separately.  [This does not
hold for non-related languages.  I would not expect English and Japanese
to mix well, but then I don't know any Japanese.  Maybe it's worth trying.]
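
The suggested experiment can be sketched in miniature (the rules and
both lexicons below are invented toys, not a serious grammar of either
language): one shared set of category rules, a lexicon per language,
and acceptance whenever each constituent is well-formed in some single
language even though the whole sentence mixes languages.

```python
# Toy sketch of the Macaroni experiment: shared categories, per-language
# lexicons.  A constituent may pick its language independently of its
# siblings (the Macaroni condition), but not internally mix languages.

RULES = {"S": [("NP", "VP")]}        # shared binary rules
LEX = {
    "en": {"NP": [["the", "cat"]], "VP": [["is", "sleeping"]]},
    "fr": {"NP": [["le", "chat"]], "VP": [["dort"]]},
}

def derives(cat, words):
    # One language's lexicon supplies the words whole...
    for lang in LEX:
        if words in LEX[lang].get(cat, []):
            return True
    # ...or a shared rule splits them into independently derivable parts.
    for a, b in RULES.get(cat, []):
        for k in range(1, len(words)):
            if derives(a, words[:k]) and derives(b, words[k:]):
                return True
    return False

print(derives("S", ["the", "cat", "dort"]))            # English NP + French VP: True
print(derives("S", ["le", "chat", "is", "sleeping"]))  # French NP + English VP: True
print(derives("S", ["the", "chat", "dort"]))           # NP mixed internally: False
```

The third sentence fails for the same reason "Prolog estin ho protos
mot..." does: no single constituent boundary lines up with the
language switch.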

------------------------------

Date: Thu 13 Oct 83 11:07:26-PDT
From: WYLAND@SRI-KL.ARPA
Subject: Dave Curry's request for a Simple English Grammar

        I think the book "Natural Language Information
Processing" by Naomi Sager (Addison-Wesley, 1981) may be useful.
This book represents the results of the Linguistic String project
at New York University, and Dr. Sager is its director.  The book
contains a BNF grammar set of 400 or so rules for parsing English
sentences.  It has been applied to medical text, such as
radiology reports and narrative documents in patient records.

Dave Wyland
WYLAND@SRI

------------------------------

Date: 11 Oct 83 19:41:39-PDT (Tue)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: WANTED: Simple English Grammar - (nf)
Article-I.D.: utah-cs.1994

(Oh no, here he goes again! and with his water-cooled keyboard too!)

Yes, analysis of syntax alone cannot possibly work - as near as I can
tell, syntax-based parsers need an enormous amount of semantic processing,
which seems to be dismissed as "just pragmatics" or whatever.  I'm
not an "in" member of the NLP community, so I haven't been able to
find out the facts, but I have a bad feeling that some of the well-known
NLP systems are gigantic hacks, whose syntactic analyzer is just a bag
hanging off the side, but about which all the papers are written.  Mind
you, this is just a suspicion, and I welcome any disproof...

                                                stan the l.h.
                                                utah-cs!shebs

------------------------------

Date: 7 Oct 83 9:54:21-PDT (Fri)
From: decvax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!asa @ Ucb-Vax
Subject: Re: WANTED: Simple English Grammar - (nf)
Article-I.D.: rayssd.187

date: 10/7/83

        Yesterday I sent a suggestion that you look at Winograd's
new book on syntax.  Upon reflection, I realized that there are
several aspects of syntax not clearly stated therein. In particular,
there is one aspect which you might wish to think about, if you
are interested in building models and using the 'expectations'
approach. This aspect has to do with the synergism of syntax and
semantics. The particular case which occurred to me is an example
of the specific ways that Latin grammar terminology is inappropriate
for English. In English, there is no 'present' tense in the intuitive
sense of that word. The stem of the verb (which Winograd calls the
'infinitive' form, in contrast to the traditional use of this term to
signify the 'to+stem' form) actually encodes the semantic concept
of 'indefinite habitual'. Thus, to say only 'I eat.' sounds
peculiar. When the stem is used alone, we expect a qualifier, as in
'I eat regularly', or 'I eat very little', or 'I eat every day'. In
this framework, there is a connection with the present, in the sense
that the process described is continuous, has existed in the past,
and is expected to continue in the future. Thus, what we call the
'present' is really a 'modal' form, and might better be described
as the 'present state of a continuing habitual process'. If we wish
to describe something related to our actual state at this time,
we use what I think of as the 'actual present', which is 'I am eating'.
Winograd hints at this, especially in Appendix B, in discussing verb
forms. However, he does not go into it in detail, so it might help
you understand better what's happening if you keep in mind the fact
that there exist specific underlying semantic functions being
implemented, which are in turn based on the type of information
to be conveyed and the subtlety of the distinctions desired. Knowing
this at the outset may help you decide the elements you wish to
model in a simplified program. It will certainly help if you
want to try the expectations technique. This is an ideal situation
in which to try a 'blackboard' type of expert system, where the
sensing, semantics, and parsing/generation engines operate in
parallel. Good luck!

        A final note: if you would like to explore further a view
of grammar which totally dispenses with the terms and concepts of
Latin grammar, you might read "The Languages of Africa" (I think
that's the title), by William Welmer.

        By the way! Does anyone out there know if Welmer ever published
his fascinating work on the memory of colors as a function of time?
Did it at least get stored in the archives at Berkeley?

Asa Simmons
rayssd!asa

------------------------------

Date: Thursday, 13 October 1983 22:24:18 EDT
From: Michael.Mauldin@CMU-CS-CAD
Subject: Total Winner


        @   @          @   @           @          @@@  @     @
        @   @          @@ @@           @           @   @     @
        @   @  @@@     @ @ @  @@@   @@@@  @@@      @  @@@    @
        @@@@@ @   @    @   @     @ @   @ @   @     @   @     @
        @   @ @@@@@    @   @  @@@@ @   @ @@@@@     @   @     @
        @   @ @        @   @ @   @ @   @ @         @   @  @
        @   @  @@@     @   @  @@@@  @@@@  @@@     @@@   @@   @


Well, thanks to the modern miracles of parallel processing (i.e. using
the UUCPNet as one giant distributed processor)  Rog-O-Matic became an
honest member of the Fighter's guild on October 10, 1983.  This is the
fourth total victory for our Heuristic Hero, but the first time he has
done so without using a "Magic Arrow".  This comes only a year and two
weeks  after  his  first  total  victory.  He will be two years old on
October 19.  Happy Birthday!

Damon Permezel of Waterloo was the lucky user. Here is his announcement:

    - - - - - - - -
    Date: Mon, 10 Oct 83 20:35:22 PDT
    From: allegra!watmath!dapermezel@Berkeley
    Subject: total winner
    To: mauldin@cmu-cs-a

    It won!  The  lucky  SOB started out with armour class of 1 and a (-1,0)
    two handed sword (found right next to it on level 1).  Numerous 'enchant
    armour' scrolls  were found,  as well as a +2 ring of dexterity,  +1 add
    strength, and slow digestion, not to mention +1 protection.  Luck had an
    important part to play,  as  initial  confrontations  with 'U's  got him
    confused and almost killed, but for the timely stumbling onto the stairs
    (while still confused). A scroll of teleportation was seen to be used to
    advantage once, while it was pinned between 2 'X's in a corridor.
    - - - - - - - -
    Date: Thu, 13 Oct 83 10:58:26 PDT
    From: allegra!watmath!dapermezel@Berkeley
    To: mlm@cmu-cs-cad.ARPA
    Subject: log

    Unfortunately, I was not logging it. I did make sure that there
    were several witnesses to the game, who could verify that it (It?)
    was a total winner.
    - - - - - - - -

The paper is still available; for a copy of "Rog-O-Matic: A Belligerent
Expert System", please send your physical address to "Mauldin@CMU-CS-A"
and include the phrase "paper request" in the subject line.

Michael Mauldin (Fuzzy)
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, PA  15213
(412) 578-3065,  mauldin@cmu-cs-a.

------------------------------

Date: 13 Oct 83 21:35:12 EDT  (Thu)
From: Dana S. Nau <dsn%umcp-cs@CSNet-Relay>
Subject: University of Maryland Colloquium

University of Maryland
Department of Computer Science
Colloquium

Monday, October 24 -- 4:00 PM
Room 2324 - Computer Science Building


             A Formal Model of Diagnostic Problem Solving


                             Dana S. Nau
                        Computer Science Dept.
                        University of Maryland
                          College Park, Md.


      Most expert computer systems are based on production rules, and to
some readers the terms "expert computer system" and "production rule
system" may seem almost synonymous.  However, there are problem domains
for which the usual production rule techniques appear to be inadequate.

      This talk presents a useful alternative to rule-based problem
solving:  a formal model of diagnostic problem solving based on a
generalization of the set covering problem, and formalized algorithms
for diagnostic problem solving based on this model.  The model and the
resulting algorithms have the following features:
(1) they capture several intuitively plausible features of human
    diagnostic inference;
(2) they directly address the issue of multiple simultaneous causative
    disorders;
(3) they can serve as a basis for expert systems for diagnostic problem
    solving; and
(4) they provide a conceptual framework within which to view recent
    work on diagnostic problem solving in general.

Coffee and refreshments - Rm. 3316 - 3:30
------------------------------

End of AIList Digest
********************

∂14-Oct-83  2049	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #78
Received: from SRI-AI by SU-AI with TCP/SMTP; 14 Oct 83  20:49:25 PDT
Date: Friday, October 14, 1983 2:25PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #78
To: AIList@SRI-AI


AIList Digest           Saturday, 15 Oct 1983      Volume 1 : Issue 78

Today's Topics:
  Philosophy - Dedekind & Introspection,
  Rational Psychology - Connectionist Models,
  Creativity - Intuition in Physics,
  Conference - Forth,
  Seminar - IUS Presentation
----------------------------------------------------------------------

Date: 10 Oct 83 11:54:07-PDT (Mon)
From: decvax!duke!unc!mcnc!ncsu!uvacs!mac @ Ucb-Vax
Subject: consciousness, loops, halting problem
Article-I.D.: uvacs.983


With regard to loops and consciousness, consider Theorem 66 of Dedekind's
book on the foundations of mathematics, "Essays on the Theory of Numbers",
translated 1901.  This is the book where the Dedekind Cut is invented to
characterize irrational numbers.

        64.  Definition.  A system S is said to be infinite when it
        is similar to a proper part of itself; in the contrary case
        S is said to be a finite system.


        66.  Theorem.  There exist infinite systems.  Proof.  My own
        realm of thoughts, i.e. the totality S of all things, which
        can be objects of my thought, is infinite.  For if s
        signifies an element of S, then is the thought s', that s
        can be object of my thought, itself an element of S.  If we
        regard this as transform phi(s) of the element s then has
        the transformation phi of S, thus determined, the property
        that the transform S' is part of S; and S' is certainly
        proper part of S, because there are elements of S (e.g. my
        own ego) which are different from such thought s' and
        therefore are not contained in S'.  Finally it is clear that
        if a, b are different elements of S, their transformation
        phi is a distinct (similar) transformation.  Hence S is
        infinite, which was to be proved.
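
In modern notation (a restatement offered here for clarity, not part
of Dedekind's text), the proof exhibits a self-map of S that is
injective but not onto:

```latex
\varphi : S \to S, \qquad
\varphi(s) = \text{the thought ``$s$ can be an object of my thought.''}
```

If $s \neq t$ then $\varphi(s) \neq \varphi(t)$, so $\varphi$ is
injective; the ego is not of the form $\varphi(s)$, so
$\varphi(S) \subsetneq S$.  By Definition 64, S is infinite.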

For that matter, net.math seems to be in a loop.  They were discussing the
Banach-Tarski paradox about a year ago.

Alex Colvin

ARPA: mac.uvacs@UDel-Relay CS: mac@virginia USE: ...uvacs!mac

------------------------------

Date: 8 Oct 83 13:53:38-PDT (Sat)
From: hplabs!hao!seismo!rochester!blenko @ Ucb-Vax
Subject: Re: life is but a dream
Article-I.D.: rocheste.3318

The statement that consciousness is an illusion does not mean it does
not or cannot have a concrete realization. I took the remarks to mean
simply that the entire mental machinery is not available for
introspection, and in its place some top-level "picture" of the process
is made available. The picture need not reflect the details of internal
processing, in the same way that most people's view of a car does not
bear much resemblance to its actual mechanistic internals.

For those who may not already be aware, the proposal is not a new one.
I find it rather attractive, admitting my own favorable
predisposition towards the proposition that mental processing is
computational.

I still think this newsgroup would be more worthwhile if readers
adopted a more tolerant attitude. It seems to be the case that there is
nearly always a silly interpretation of someone's contribution;
discovering that interpretation doesn't seem to be a very challenging
task.

        Tom Blenko
        blenko@rochester
        decvax!seismo!rochester!blenko
        allegra!rochester!blenko

------------------------------

Date: 11 Oct 83 9:37:52-PDT (Tue)
From: hplabs!hao!seismo!rochester!gary @ Ucb-Vax
Subject: Re: "Rational Psychology"
Article-I.D.: rocheste.3352

This is in response to John Black's comments, to wit:

>     Having a theoretical (or "rational" -- terrible name with all the wrong
> connotations) psychology is certainly desirable, but it does have to make
> some contact with the field it is a theory of.  One of the problems here is
> that the "calculus" of psychology has yet to be invented, so we don't have
> the tools we need for the "Newtonian mechanics" of psychology.  The latest
> mathematical candidate was catastrophe theory, but it turned out to be a
> catastrophe when applied to human behavior.  Perhaps Pereira and Doyle have
> a "calculus" to offer.

This is an issue that I (and I think many AI'ers) am particularly interested in,
that is, the correspondence between our programs and the actual workings of
the mind. I believe that an *explanatory* theory of behavior will not be at
the functional level of correspondence with human behavior. Theories which are
at the functional level are important for pinpointing *what* it is that people
do, but they don't get a handle on *how* they do it. And, I think there are
side-effects of the architecture of the brain on behavior that do not show up
in functional level models.

This is why I favor (my favorite model!) connectionist models as being a
possible "calculus of Psychology". Connectionist models, for those unfamiliar
with the term, are a version of neural network models developed here at
Rochester (with related models at UCSD and CMU) that attempts to bring the
basic model unit into line with our current understanding of the information
processing capabilities of neurons. The units themselves are relatively stupid
and slow, but have state, and can compute simple functions (not restricted to
linear). The simplicity of the functions is limited only by "gentleman's
agreement", as we still really have no idea of the upper limit of neuronal
capabilities, and we are guided by what we seem to need in order to accomplish
whatever task we set them to. The payoff is that they are highly connected to
one another, and can compute in parallel. They are not allowed to pass symbol
structures around, and have their output restricted to values in the range
1..10. Thus we feel that they are most likely to match the brain in power.

The problem is how to compute with the things! We regard the outcome of a
computation to be a "stable coalition", a set of units which mutually
reinforce one another. We use units themselves to represent values of
parameters of interest, so that mutually compatible values reinforce one
another, and mutually exclusive values inhibit one another. These could
be the senses of the words in a sentence, the color of a patch in the
visual field, or the direction of intended eye movement. The result is
something that looks a lot like constraint relaxation.
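
As a much simplified illustration of relaxation toward a "stable
coalition" (the network, weights, and update rule below are invented
for this example and are not the Rochester formulation):

```python
# Toy relaxation sketch: two word-sense units per word, mutual
# inhibition between rival senses, excitation between compatible
# senses.  Activations are clipped to [0, 10], a simplification of
# the restricted output range mentioned above.

UNITS = ["bank/river", "bank/money", "deposit/geology", "deposit/cash"]
W = {  # symmetric weights: rivals inhibit, compatible senses excite
    ("bank/river", "bank/money"): -2.0,
    ("deposit/geology", "deposit/cash"): -2.0,
    ("bank/money", "deposit/cash"): +1.5,
    ("bank/river", "deposit/geology"): +1.5,
}

def weight(a, b):
    return W.get((a, b)) or W.get((b, a)) or 0.0

def relax(act, steps=50):
    # Synchronous update: each unit moves with its net input, clipped.
    for _ in range(steps):
        new = {}
        for u in UNITS:
            net = sum(weight(u, v) * act[v] for v in UNITS if v != u)
            new[u] = min(10.0, max(0.0, act[u] + 0.1 * net))
        act = new
    return act

# A slight initial bias toward the financial reading of "deposit"
# pulls the whole network into the financial coalition.
act = {u: 5.0 for u in UNITS}
act["deposit/cash"] = 6.0
final = relax(act)
print(final["bank/money"] > final["bank/river"])         # True
print(final["deposit/cash"] > final["deposit/geology"])  # True
```

The surviving pair of mutually reinforcing units is the "stable
coalition"; the losing senses are driven toward zero, which is the
constraint-relaxation behavior described above.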

Anyway, I don't want to go on forever. If this sparks discussion or
interest, references are available from the U. of R. CS Dept.,
Rochester, NY 14627 (the biblio. is a TR called "The Rochester
Connectionist Papers").

gary cottrell   (allegra or seismo)!rochester!gary or gary@rochester

------------------------------

Date: 10 Oct 83 8:00:59-PDT (Mon)
From: harpo!eagle!mhuxi!mhuxj!mhuxl!mhuxm!pyuxi!pyuxn!rlr @ Ucb-Vax
Subject: Re: RE: Intuition in Physics
Article-I.D.: pyuxn.289

>    I presume that at birth, ones mind is not predisposed to one or another
>    of several possible theories of heavy molecule collision (for example.)
>    Further, I think it unlikely that personal or emotional interaction in
>    one "pre-analytic" stage (see anything about developmental psych.)
>    is likely to bear upon ones opinions about those molecules. In fact I
>    find it hard to believe that anything BUT technical learning is likely
>    to bear on ones intuition about the molecules. One might want to argue
>    that ones personality might force you to lean towards "aggressive" or
>    overly complex theories, but I doubt that such effects will lead to
>    the creation of a theory.  Only a rather mild predisposition at best.

>    In psychology it is entirely different.  A person who is aggressive has
>    lots of reasons to assume everyone else is as well. Or paranoid, or
>    that rote learning is esp good or bad, or that large dogs are dangerous
>    or a number of other things that bear directly on ones theories of the
>    mind.  And these biases are acquired from the process of living and are
>    quite un-avoidable.

The author believes that, though behavior patterns and experiences in a
person's life may affect their viewpoint in psychological studies, this
does not apply in "technical sciences" (not the author's phrasing, and not
mine either---I just can't think of another term) like physics.  It would
seem that flashes of "insight" obtained by anyone in a field involving
discovery have to be based on both the technical knowledge that the person
already has AND the entire life experience up to that point.  To oversimplify,
if one has never seen a specific living entity (a flower, a specific animal)
or witnessed a physical event, or participated in a particular human
interaction, one cannot base a proposed scientific model on these things, and
these flashes are often based on such analogies to reality.

------------------------------

Date: 9 Oct 83 14:38:45-PDT (Sun)
From: decvax!genrad!security!linus!utzoo!utcsrgv!utcsstat!laura @
      Ucb-Vax
Subject: Re: RE: Intuition in Physics
Article-I.D.: utcsstat.1251

Gary,
I don't know about why you think about physics, but I know something about
why *I* think about physics. You see, i have this deep fondness for
"continuous creation" as opposed to "the big bang". This is too bad for me,
since "big bang" appears to be correct, or at any rate, "continuous
creation" appears to be *wrong*. Perhaps what it more correct is
"bang! sproiinngg.... bang!" or a series of bangs, but this is not
the issue.

these days, if you ask me to explain the origins of the universe, from
a physical point of view I am going to discuss "big bang". I can do this.
It just does not have the same emotional satisfaction to me as "c c",
but that is too bad for me; I do not go around spreading antiquated
theories to people who ask me in good faith for information.

But what if the evidence were not all in yet? What if there were an
equal number of reasons to believe one or the other? What would I be
doing? Talking about continuous creation. I might add a footnote that
there was "this other theory ... the big bang theory" but I would not
discuss it much. I have that strong an emotional attachment to
"continuous creation".

You can also read that other great issues in physics and astronomy had
their great believers -- there were the great "wave versus particle"
theories of light, and the Tycho Brahe cosmology versus the Kepler
cosmology, and these days you get similar arguments ...

In 50 years, we may all look back and say, well, how silly, everyone
should have seen that X, since X is now patently obvious. This will
explain why people believe X now, but not why people believed X then,
or why people DIDN'T believe X then.

Why didn't Tycho Brahe come up with Kepler's theories? It wasn't
that Kepler was a better experimenter, for Kepler himself admits
that he was a lousy experimenter and Brahe was renowned for having
the best instruments in the world, and being the most painstaking
in measurements. It wasn't that they did not know each other, for
Kepler worked with Brahe, and replaced him as Royal Astronomer, and
was familiar with his work before he ever met Brahe...

It wasn't that Brahe was religious and Kepler was not, for it was
Kepler that was almost made a minister and studied very hard in Church
schools (which literally brought him out of peasantry into the middle
class) while Brahe, the rich nobleman, could get away with acts that
the church frowned upon (to put it mildly).

Yet Kepler was able to think in heliocentric terms, while Brahe,
who came so...so...close, balked at the idea and put the sun circling
the earth while all the other planets circled the sun. Absolutely
astonishing!

I do not know where these differences came from. However, I have a
pretty good idea why continuous creation is more emotionally satisfying
for me than "big bang" (though these days I am getting to like
"bang! sproing! bang!" as well.) As a child, i ran across the "c c"
theory at the same time as i ran across all sorts of the things that
interest me to this day. In particular, I recall reading it at the
same time that I was doing a long study of myths, creation myths
in particular. Certain myths appealed to me, and certain ones did not.

In particular, the myths that centred around the Judeo-Christian
tradition (the one god created the world -- boom!) had almost no
appeal to me those days, since I had utter and extreme loathing for
the god in question. (This in turn was based on the discovery that
this same wonderful god was the one that tortured and burned millions
in his name for the great sin of heresy.) And thus, "big bang"
which smacked of "poof! god created" was much less favoured by me
at age 8 than continuous creation (no creator necessary).

Now that I am older, I have a lot more tolerance for Yahweh, and
I do not find it intolerable to believe in the Big Bang. However,
it is not as satisfying.  Thus I know that some of my beliefs
which in another time could have been essential to my scientific
theories and inspirations, are based on an 8-year-old me reading
about the witchcraft trials.

It seems likely that somebody out there is furthering science by
discovering new theories based on ideas which are equally scientific.

Laura Creighton
utzoo!utcsstat!laura

------------------------------

Date: Fri 14 Oct 83 10:50:52-PDT
From: WYLAND@SRI-KL.ARPA
Subject: FORTH CONVENTION ANNOUNCEMENT

              5TH ANNUAL FORTH NATIONAL CONVENTION

                       October 14-15, 1983


                         Hyatt Palo Alto

                       4920 El Camino Real
                       Palo Alto, CA 94306



        Friday   10/14: 12:00-5:00  Conference and Exhibits
        Saturday 10/15:  9:00-5:00  Conference and Exhibits
                              7:00  Banquet and Speakers


        This FORTH convention includes sessions on:

        Relational Data Base Software - an implementation
        FORTH Based Instruments - implementations
        FORTH Based Expert Systems - GE DELTA system
        FORTH Based CAD system - an implementation
        FORTH Machines - hardware implementations of FORTH
        Pattern Recognition Based Programming System - implementation
        Robotics Uses - Androbot

        There are also introductory sessions and sessions on
various standards.  Entry fee is $5.00 for the sessions and
exhibits.  The banquet features Tom Frisna, president of
Androbot, as the speaker (fee is $25.00).

------------------------------

Date: 13 Oct 1983 1441:02-EDT
From: Sylvia Brahm <BRAHM@CMU-CS-C.ARPA>
Subject: IUS Presentation

                 [Reprinted from the CMU-C bboard.]

George Sperling from NYU and Bell Laboratories will give a talk
on Monday, October 17, 3:30 to 5:00 in Wean Hall 5409.

Title will be Image Processing and the Logic of Perception.
This talk is not a unification but merely the temporal juxta-
position of two lines of research.  The logic of perception
involves using unreliable, ambiguous information to arrive at
a categorical decision.  Critical phenomena are multiple stable
states (in response to the same external stimulus) and path
dependence (hysteresis):  the description is potential theory.
Neural models with local inhibitory interaction are the
antecedents of contemporary relaxation methods.  New (and old)
examples are provided from binocular vision and depth perception,
including a polemical demonstration of how the perceptual decision
of 3D structure in a 2D display can be dominated by an irrelevant
brightness cue.

Image processing will deal with the practical problem of squeezing
American Sign Language (ASL) through the telephone network.
Historically, an image (e.g., TV at 4 MHz) has been valued at more
than 10^3 speech tokens (e.g., telephone at 3 kHz).  With
image-processed ASL, the ratio is shown to be approaching unity.
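[The bandwidth arithmetic behind that "more than 10^3" figure can be
checked directly.  A rough illustration using the nominal channel widths
quoted above; equating a ratio of bandwidths with a ratio of "speech
tokens" is of course only an order-of-magnitude approximation. - Ed.]

```python
# Rough check of the image-vs-speech bandwidth ratio quoted above.
# Figures are the nominal channel widths from the abstract.
tv_bandwidth_hz = 4_000_000   # broadcast TV image, ~4 MHz
phone_bandwidth_hz = 3_000    # telephone speech channel, ~3 kHz

ratio = tv_bandwidth_hz / phone_bandwidth_hz
print(f"one image channel ~ {ratio:.0f} speech channels")  # on the order of 10^3
```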

Movies to illustrate both themes will be shown.  Appointments to
speak with Dr. Sperling can be made by calling x3802.

------------------------------

End of AIList Digest
********************

∂17-Oct-83  0120	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #79
Received: from SRI-AI by SU-AI with TCP/SMTP; 17 Oct 83  01:19:42 PDT
Date: Sunday, October 16, 1983 10:13PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #79
To: AIList@SRI-AI


AIList Digest            Monday, 17 Oct 1983       Volume 1 : Issue 79

Today's Topics:
  AI Societies - Bledsoe Election,
  AI Education - Videotapes & Rutgers Mini-Talks,
  Psychology - Intuition & Consciousness
----------------------------------------------------------------------

Date: Fri 14 Oct 83 08:41:39-CDT
From: Robert L. Causey <Cgs.Causey@UTEXAS-20.ARPA>
Subject: Congratulations Woody!

               [Reprinted from the UTexas-20 bboard.]


Woody Bledsoe has been named president-elect of the American
Association for Artificial Intelligence.  He will become
president in August, 1984.

According to the U.T. press release, Woody said, "You can't
replace the human, but you can greatly augment his abilities."
Woody has greatly augmented the computer's abilities. Congratulations!

------------------------------

Date: 12 Oct 83 12:59:24-PDT (Wed)
From: ihnp4!hlexa!pcl @ Ucb-Vax
Subject: AI (and other) videotapes to be produced by AT&T Bell
         Laboratories
Article-I.D.: hlexa.287

[I'm posting this for someone who does not have access to netnews.
Send comments to the address below; electronic mail to me will be
forwarded. - PCL]

AT&T Bell Laboratories is planning to produce a
videotape on artificial intelligence that concentrates
on "knowledge representation" and "search strategies"
in expert systems.  The program will feature a Bell
Labs prototype expert system called ACE.

Interviews of Bell Labs developers will provide the
content.  Technical explanations will be made graphic
with computer generated animation.

The tape will be sold to colleges and industry by
Hayden Book Company as part of a software series.
Other tapes will cover Software Quality, Software
Project Management and Software Design Methodologies.

Your comments are welcome.  Write to W. L. Gaddis,
Senior Producer, Bell Laboratories, 150 John F. Kennedy
Parkway, Room 3L-528, Short Hills, NJ 07078

------------------------------

Date: 16 Oct 83 22:42:42 EDT
From: Sri <Sridharan@RUTGERS.ARPA>
Subject: Mini-talks

Recently two notices were copied from the Rutgers bboard to AIList.
They listed a number of "talks" by various faculty back to back.
Those who wondered how a talk could be given in 10 minutes and
those who wondered why a talk would be given in 10 minutes may
be glad to know the purpose of the series.  This is an innovative
method designed by the CS graduate student society for introducing
new graduate students and new faculty members to the research
interests of the CS faculty.  Each talk typically outlined
the area of CS and AI of interest to the faculty member, discussed
research opportunities and the background (readings, courses) necessary
for doing research in that area.

I have participated in this mini-talk series for several years and
have found it valuable as a speaker.  Being given about 10 minutes
to say what I am interested in forces me to distill my thoughts and
to say them simply.  The feedback from students is also positive.
Perhaps you will hear from some of the students too.

------------------------------

Date: 11 Oct 83 2:44:12-PDT (Tue)
From: harpo!utah-cs!shebs @ Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: utah-cs.1985

I share your notion (that human ability is limited, and that machines
might actually go beyond man in "consciousness"), but not your confidence.
How do you intend to prove your ideas?  You can't just wait for a fantastic
AI program to come along - you'll end up right back in the Turing Test
muddle.  What *is* consciousness?  How can it be characterized abstractly?
Think in terms of universal psychology - given a being X, is there an
effective procedure (used in the technical sense) to determine whether
that being is conscious?  If so, what is that procedure?

                                        AI is applied philosophy,
                                        stan the l.h.
                                        utah-cs!shebs

ps Re rational or universal psychology: a professor here observed that
it might end up with the status of category theory - mildly interesting
and all true, but basically worthless in practice... Any comments?

------------------------------

Date: 12 Oct 83 11:43:39-PDT (Wed)
From: decvax!cca!milla @ Ucb-Vax
Subject: Re: the Halting problem.
Article-I.D.: cca.5880

Of course self-awareness is real.   The  point  is  that  self-awareness
comes  about  BECAUSE  of  the  illusion  of consciousness.  If you were
capable of only very primitive thought, you would  be  less  self-aware.
The  greater  your  capacity  for complex thought, the more you perceive
that your actions are the result of an active,  thinking  entity.   Man,
because  of  his  capacity  to form a model of the world in his mind, is
able to form a model of himself.  This all makes  sense  from  a  purely
physical  viewpoint;  there  is  no  need  for  a supernatural "soul" to
complement the brain.  Animals appear to have some  self-awareness;  the
quantity  depends  on  their intelligence.  Conceivably, a very advanced
computer system could have a high degree  of  self-awareness.   As  with
consciousness,  it is lack of information -- how the brain works, random
factors, etc. which makes self-awareness  seem  to  be  a  very  special
quality.  In fact, it is a very simple, unremarkable characteristic.

                                                M. Massimilla

------------------------------

Date: 12 Oct 83 7:16:26-PDT (Wed)
From: harpo!eagle!mhuxi!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Physics and Intuition
Article-I.D.: ncsu.2367


I intend this to be my final word on the matter.  I intend it to be
brief: as someone said, a bit more tolerance on this group would help.
From Laura we have a wonderful story of the intermeshing of physics and
religion.  Well, I picked molecular physics for its avoidance of any
normal life experiences.  Cosmology and creation are not in that category
quite so strongly because religion is an everyday thing and will lead to
biases in cosmological theories.  Clearly there is a continuum from
things which are divorced from everyday experience to those that are
very tightly connected to it.  My point is that most "hard" sciences
are at one end of the continuum while psychology is clearly way over
at the other end, by definition.  It is my position that the rather
big difference between the way one can think about the two ends of the
spectrum suggests that what works well at one end may well be quite
inappropriate at the other.  Or it may work fine.  But there is a burden
of proof that I hand off to the rational psychologists before I will
take them more seriously than I take most psychologists.  I have the same
attitude towards cosmology.  I find it patently ludicrous that so many
people push our limited theories so far outside their range of applicability
and expect the extrapolation to be accurate.  Such extrapolation is
an interesting way to understand the failings of the theories, but to
believe that DOES require faith without substantiation.

I dislike being personal, but Laura is trying to make it seem black and
white.  The big bang has hardly been proved. But she seems to be saying
it has.  It is of course not so simple. Current theories and data
seem to be tipping the scales, but the scales move quite slowly and will
no doubt be straightened out by "new" work 30 years hence.

The same is true of my point about technical reasoning.  Clearly no
thought can be entirely divorced from life experiences without 10
years on a mountain-top.  It's not that simple.  That doesn't mean that
there are not definable differences between different ways of thinking
and that some may be more suitable to some fields.  Most psychologists
are quite aware of this problem (I didn't make it up) and as a result
purely experimental psychology has always been "trusted" more than
theorizing without data.  Hard numbers give one some hope that it is
the world, not your relationship with a pet turtle speaking in your
work.

If anyone has any more to say to me about this, send me mail, please.
I suspect this is getting tiresome for most readers.  (It's getting
tiresome for me...)  If you quote me or use my name, I will always
respond.  This network with its delays is a bad debate forum.  Stick to
ideas in abstraction from the proponent of the idea.  And please look
for what someone is trying to say before assuming that they are blathering.
----GaryFostel----

------------------------------

Date: 14 Oct 83 13:43:56 EDT  (Fri)
From: Paul Torek <flink%umcp-cs@CSNet-Relay>
Subject: consciousness and the teleporter

    From Michael Condict   ...!cmcl2!csd1!condict

        This, then, is the reason I would never step into one of those
        teleporters that functions by ripping apart your atoms, then
        reconstructing an exact copy at a distant site.  [...]

In spite of the fact that consciousness (I agree with the growing chorus) is
NOT an illusion, I see nothing wrong with using such a teleporter.  Let's
take the case as presented in the sci-fi story (before Michael Condict rigs
the controls).  A person disappears from (say) Earth and a person appears at
(say) Tau Ceti IV.  The one appearing at Tau Ceti is exactly like the one
who left Earth as far as anyone can tell: she looks the same, acts the same,
says the same sort of things, displays the same sort of emotions.  Note that
I did NOT say she is the SAME person -- although I would warn you not to
conclude too hastily whether she is or not.  In my opinion, *it doesn't
matter* whether she is or not.

To get to the point:  although I agree that consciousness needs something to
exist, there *IS* something there for it -- the person at Tau Ceti.  On
what grounds can anyone believe that the person at Tau Ceti lacks a
consciousness?  That is absurd -- consciousness is a necessary concomitant
of a normal human brain.  Now there IS a question as to whether the
conscious person at Tau Ceti is *you*, and thus as to whether his mind
is *your* mind.  There is a considerable philosophical literature on this
and very similar issues -- see *A Dialogue on Personal Identity and
Immortality* by John Perry, and "Splitting Self-Concern" by Michael B. Green
in *Pacific Philosophical Quarterly*, vol. 62 (1981).

But in my opinion, there is a real question whether you can say whether
the person at Tau Ceti is you or not.  Nor, in my opinion, is that
question really important.  Take the modified case in which Michael Condict
rigs the controls so that you are transported, yet remain also at Earth.
Michael Condict calls the one at Earth the "original", and the one at Tau
Ceti the "copy".  But how do you know it isn't the other way around -- how
do you know you (your consciousness) weren't teleported to Tau Ceti, while
a copy (someone else, with his own consciousness) was produced at Earth?

"Easy -- when I walk out of the transporter room at Earth, I know I'm still
me; I can remember everything I've done and can see that I'm still the same
person."  WRONGO -- the person at Tau Ceti has the same memories, etc.  I
could just as easily say "I'll know I was transported when I walk out of the
transporter room at Tau Ceti and realize that I'm still the same person."

So in fairness, we can't say "You walk out of the transporter room at both
ends, with the original you realizing that something went wrong."  We have
to say "You walk out of the transporter at both ends, with *the one at
Earth* realizing something is wrong."  But wait -- they can't BOTH be you --
or can they?  Maybe neither is you!  Maybe there's a continuous flow of
"souls" through a person's body, with each one (like the "copy" at Tau Ceti
(or is it at Earth)) *seeming* to remember doing the things that that body
did before ...

If you acknowledge that consciousness is rooted in the physical human brain,
rather than some mysterious metaphysical "soul" that can't be seen or
touched or detected in any way at all, you don't have to worry about whether
there's a continuous flow of consciousnesses through your body.  You don't
have to be a dualist to recognize the reality of consciousness; in fact,
physicalism has the advantage that it *supports* the commonsense belief that
you are the same person (consciousness) you were yesterday.

                                --Paul Torek, U of MD, College Park
                                ..umcp-cs!flink

------------------------------

End of AIList Digest
********************

∂20-Oct-83  1541	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #80
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Oct 83  15:40:51 PDT
Date: Thursday, October 20, 1983 9:23AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #80
To: AIList@SRI-AI


AIList Digest           Thursday, 20 Oct 1983      Volume 1 : Issue 80

Today's Topics:
  Administrivia - Complaints &  Seminar Abstracts,
  Implementations - Parallel Production System,
  Natural Language - Phrasal Analysis & Macaroni,
  Psychology - Awareness,
  Programming Languages - Elegance and Purity,
  Conferences - Reviewers needed for 1984 NCC,
  Fellowships - Texas
----------------------------------------------------------------------

Date: Tue 18 Oct 83 20:33:15-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Complaints

I have received copies of two complaints sent to the author
of a course announcement that I published.  The complaints
alleged that the announcement should not have been put out on
the net.  I have three comments:

First, such complaints should come to me, not to the original
authors.  The author is responsible for the content, but it is
my decision whether or not to distribute the material.  In this
case, I felt that the abstract of a new and unique AI course
was of interest to the academic half of the AIList readership.

Second, there is a possibility that the complainants received
the article in undigested form, and did not know that it was
part of an AIList digest.  If anyone is currently distributing
AIList in this manner, I want to know about it.  Undigested
material is being posted to net.ai and to some bboards, but it
should not be showing up in personal mailboxes.

Third, this course announcement was never formally submitted
to AIList.  I picked the item up from a limited distribution,
and failed to add a "reprinted from" or disclaimer line to
note that fact.  I apologize to Dr. Moore for not getting in
touch with him before sending the item out.

                                        -- Ken Laws

------------------------------

Date: Tue 18 Oct 83 09:01:29-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Seminar Abstracts

It has been suggested to me that seminar abstracts would be more
useful if they contained the home address (or net address, phone
number, etc.) of the speaker.  I have little control over the
content of these messages, but I encourage those who compose them
to include such information.  Your notices will then be of greater
use to the scientific community beyond just those who can attend
the seminars.

                                        -- Ken Laws

------------------------------

Date: Mon 17 Oct 83 15:44:52-EDT
From: Mark D. Lerner <LERNER@COLUMBIA-20.ARPA>
Subject: Parallel production systems.


The parallel production  system interpreter is  running
on the 15 node DADO prototype. We can presently run  up
to 32 productions, with 12 clauses in each  production.
The prototype has been operational since April 1983.
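
[For readers unfamiliar with the terminology: a production system
repeatedly matches condition clauses against a working memory of facts
and fires the actions of any production whose clauses all hold.  A toy
sequential sketch follows; it is illustrative only and is NOT the DADO
interpreter described above, which distributes the match across its
processing nodes. - Ed.]

```python
# Toy forward-chaining production system (sequential, illustrative only).
# Working memory: a set of facts.
# Each production: (condition clauses, facts to add when all clauses hold).
productions = [
    ({"has-wings", "lays-eggs"}, {"is-bird"}),
    ({"is-bird", "can-sing"}, {"is-songbird"}),
]

def run(working_memory):
    """Fire productions repeatedly until no new facts can be added."""
    wm = set(working_memory)
    changed = True
    while changed:
        changed = False
        for clauses, additions in productions:
            # A production fires when all its clauses hold and it
            # would still contribute at least one new fact.
            if clauses <= wm and not additions <= wm:
                wm |= additions
                changed = True
    return wm

facts = run({"has-wings", "lays-eggs", "can-sing"})
# facts now also contains "is-bird" and "is-songbird"
```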

------------------------------

Date: 18 Oct 1983 0711-PDT
From: MEYERS.UCI-20A@Rand-Relay
Subject: phrasal analysis


Recently someone asked why PHRAN was not based on a grammar.
It just so happens ....

I have written a parser which uses many of the ideas of PHRAN
but which organizes the phrasal patterns into several interlocking
grammars, some 'semantic' and some syntactic.

The program is called VOX (Vocabulary Extension System) and attempts
a 'complete' analysis of English text.

I am submitting a paper about the concepts underlying the system
to COLING, the conference on Computational Linguistics.
Whether or not it is accepted, I will make a UCI Technical Report
out of it.

To obtain a copy of the paper, write:

                Amnon Meyers
                AI Project
                Dept. of Computer Science
                University of California,
                Irvine, CA   92717

------------------------------

Date: Wednesday, 19 October 1983 10:48:46 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Grammars; Greek; invective

One comment and two meta-comments:

Re: the validity of grammars: almost no one claims that grammatical
        phenomena don't exist (even Schank doesn't go that far).  What the
        argument generally is about is whether one should, as the first step
        in understanding an input, build a grammatical tree, without any (or
        much) information from either semantics or the current
        conversational context.  One side wants to do grammar first, by
        itself, and then the other stuff, whereas the other side wants to try
        to use all available knowledge right from the start.  Of course, there
        are folks taking extreme positions on both sides, and people
        sometimes get a bit carried away in the heat of an argument.

Re: Greek: As a general rule, it would be helpful if people who send in
        messages containing non-English phrases included translations.  I
        cannot judge the validity of the Macaroni argument, since I don't
        completely understand either example.  One might argue that I should
        learn Greek, but I think expecting me to know Maori grammatical
        classes is stretching things a bit.

Re: invective: Even if the reference to Yahweh was meant as a childhood
        opinion which has mellowed with age, I object to statements of the
        form "this same wonderful god... tortured and burned..." etc.
        Perhaps it was a typo.  As we all know, people have tortured and
        burnt other people for all sorts of reasons (including what sort of
        political/economic systems small Asian countries should have), and I
        found the statement offensive.

------------------------------

Date: Wednesday, 19 October 1983 13:23:59 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Awareness

        As Paul Torek correctly points out, this is a metaphysical question.
The only differences I have with his note are over the use of some difficult
terms, and the fact that he clearly prefers the "physicalist" notion.  Let
me start by saying that one shouldn't try to prove one side or the other,
since proofs clearly cannot work: awareness isn't subject to proof.  The
evidence consists entirely of internal experiences, without any external
evidence.  (Let me warn everyone that I have not been formally trained in
philosophy, so some of my terms may be non-standard.)  The fact that this
issue isn't subject to proof does not make it trivial, or prevent it from
being a serious question.  One's position on this issue determines, I think,
to a large extent one's view on many other issues, such as whether robots
will eventually have the same legal stature as humans, and whether human life
should have a special value, beyond its information handling abilities, for
instance for euthanasia and abortion questions. (I certainly don't want to
argue about abortion; personally, I think it should be legal, but not treated
as a trivial issue.)

        At this point, my version of several definitions is in order.  This
is because several terms have been confused, due probably to the
metaphysical nature of the problem.  What I call "awareness" is *not*
"self-reference": the ability of some information processing systems (including
people) to discuss and otherwise deal with representations of themselves.
It is also *not* what has been called here "consciousness": the property of
being able to process information in a sophisticated fashion (note that
chemical and physical reactions process information as well).  "Awareness"
is the internal experience which Michael Condict was talking about, and
which a large number of people believe is a real thing.  I have been
told that this definition is "epiphenomenal", in that awareness is not the
information processing itself, but is outside the phenomena observed.

        Also, I believe that I understand both points of view; I can argue
either side of the issue.  However, for me to argue that the experience of
"awareness" consists solely of a combination of information processing
capabilities misses the "dualist" point entirely, and would require me to
deny that I "feel" the experience I do.  Many people in science deny that
this experience has any reality separate from the external evidence of
information processing capabilities.  I suspect that one motivation for this
is that, as Paul Torek seems to be saying, this greatly simplifies one's
metaphysics.

        Without trying to prove the "dualist" point of view, let me give an
example of why this view seems, to me, more plausible than the
"physicalist" view.  It is a variation of something Joseph Weizenbaum
suggested.  People are clearly aware, at least they claim to be.  Rocks are
clearly not aware (in the standard Western view).  The problem with saying
that computers will ever be aware in the same way that people are is that
they are merely re-arranged rocks.  A rock sitting in the sun is warm, but
is not aware of its warmth, even though that information is being
communicated to, for instance, the rock it is sitting on.  A robot next to
the rock is also warm, and, due to a skillful re-arrangement of materials,
not only carries that information in its kinetic energy, but even has a
temperature "sensor", and a data structure representing its body
temperature.  But it is no more aware (in the experiential sense) of what is
going on than the rock is, since we, by merely using a different level of
abstraction in thinking about it, can see that the data structure is just a
set of states in some semiconductors inside it.  The human being sitting
next to the robot not only senses the temperature and records it somehow (in
the same sense as the robot does), but experiences it internally, and enjoys
it (I would anyway).  This experiencing is totally undetectable to physical
investigation, even when we (eventually) are able to analyze the data
structures in the brain.

An interesting side-note to this is that in some cultures, rocks, trees,
etc., are believed to experience their existence.  This is, to me, an
entirely acceptable alternate theory, in which the rock and robot would both
feel the warmth (and other physical properties) they possess.

As a final point, when I consider what I am aware of at any given moment, it
seems to include a visual display, an auditory sensation, and various bits
of data from parts of my body (taste, smell, touch, pain, etc.).  There are
many things inside my brain that I am *not* aware of, including the
preprocessing of my vision, and any stored memories not recalled at the
moment.  There is a sharp boundary between those things I am aware of and
those things I am not.  Why should this be?  It isn't just that the high
level processes, whatever they are, have access to only some structures.
They *feel* different from other structures in the brain, whose information
I also have access to, but which I have no feeling of awareness in.  It
would appear that there is some set of processing elements to which my
awareness has access.  This is the old mind-body problem that has plagued
philosophers for centuries.

To deny this qualitative difference would be, for me, silly, as silly as
denying that the physical world really exists.  In any event, whatever stand
you take on this issue is based on personal preferences in metaphysics, and
not on physical proof.

------------------------------

Date: 14 Oct 83  1237 PDT
From: Dick Gabriel <RPG@SU-AI>
Subject: Elegance and Logical Purity

                 [Reprinted from the Prolog Digest.]


In the Lisp world, as you know, there are 2 Lisps that serve as
examples for this discussion: T and Common Lisp. T is based on
Scheme and, as such, it is relatively close to a `pure' Lisp or
even a lambda-calculus-style Lisp. Common Lisp is a large,
`user-convenient' Lisp. What are the relative successes of these
two Lisps?  T appeals to the few, me included, while Common Lisp
appeals to the many. The larger, user-convenient Lisps provide
programmers with tools that help solve problems, but they don't
dictate the style of the solutions.

Think of it this way: When you go to an auto mechanic and you
see he has a large tool chest with many tools, are you more or
less confident in him than if you see he has a small tool box
with maybe 5 tools?  Either way, our confidence should be based
on the skill of the mechanic, but we expect a skillful mechanic
with the right tools to be more efficient and possibly more
accurate than the mechanic who has few tools, or who merely has
tools and raw materials for making further tools.

One could take RPLACA as an analog to a user-convenience in this
situation. We do not need RPLACA: it messes up the semantics, and
we can get around it with other, elegant and pure devices. However,
RPLACA serves user convenience by providing an efficient means of
accomplishing an end.  In supplying RPLACA, I, the implementer,
have thought through what the user is trying to do.  No user would
appreciate it if I suggested that I knew better than he what he is
doing and to propose he replace all list structure that he might
wish to use with side-effect with closures and to then hope for
a smarter compiler someday.
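
[For readers who don't know Lisp: RPLACA destructively replaces the car
(first field) of an existing cons cell, so every reference to that cell
sees the change, while the `pure' route allocates a fresh cell and
leaves the original alone.  A minimal Python analog follows; the names
`Cons`, `rplaca`, and `pure_replace_car` are the editor's illustration,
not Lisp itself. - Ed.]

```python
# Python analog of destructive (RPLACA-style) versus pure list update.
# Illustrative sketch only -- not actual Lisp semantics.

class Cons:
    """A minimal cons cell, standing in for Lisp list structure."""
    def __init__(self, car, cdr=None):
        self.car = car
        self.cdr = cdr

def rplaca(cell, new_car):
    """Destructive update, as RPLACA does: mutate the cell in place."""
    cell.car = new_car
    return cell

def pure_replace_car(cell, new_car):
    """Pure alternative: allocate a fresh cell, sharing the old cdr."""
    return Cons(new_car, cell.cdr)

shared = Cons(1, Cons(2))
alias = shared                         # a second reference to the same cell

fresh = pure_replace_car(shared, 99)   # alias.car is still 1
rplaca(shared, 42)                     # alias.car is now 42 as well
```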

I think it shows more contempt for users' abilities to dictate a
solution to him in the name of `elegance and logical purity' than
for me to think through what he wants for him.

I am also hesitant to foist on people systems or languages that
are so elegant and pure that I have trouble explaining them to users
because I am subject to being ``muddled about them myself.''

Maybe it is stupid to continue down the Lisp path, but Lisp is the
second oldest language (after FORTRAN), and people clamor to use it.
Recall what Joel Moses said when comparing APL with Lisp.

  APL is perfect; it is like a diamond. But like a diamond
  you cannot add anything to it to make it more perfect, nor
  can you add anything to it and have it remain a diamond.
  Lisp, on the other hand, is like a ball of mud. You can add
  more mud to it, and it is still a ball of mud.

I think user convenience is like mud.

-rpg-

------------------------------

Date: Tuesday, 18 October 1983 09:32:25 EDT
From: Joseph.Ginder at CMU-CS-SPICE
Subject: Common Lisp Motivation

                 [Reprinted from the Prolog Digest.]


Being part of the Common Lisp effort, I would like to express an
opinion about the reasons for the inclusion of so many "impurities" in
Common Lisp that differs from that expressed by Fernando Pereira in
the last Prolog Digest.  I believe the reason for including much of
what is now Common Lisp in the Common Lisp specification was an effort
to provide common solutions to common problems; this is as opposed to
making concessions to language limitations or people's (in)ability to
write smart compilers.  In particular, the reference to optimizing
"inefficient copying into efficient replacement" does not seem a
legitimate compiler optimization (in the general sense) -- this
clearly changes program semantics.  (In the absence of side effects,
this would not be a problem, but note that some side effect is
required to do IO.)  For a good statement of the goals of the Common
Lisp effort, see Guy Steele's paper in the 1982 Lisp and Functional
Programming Conference Proceedings.

Let me hasten to add that I agree with Pereira's concern that
expediency not be promoted to principle.  It is for this very reason
that language features such as flavors and the loop construct were not
included in the Common Lisp specification -- we determined not to
standardize until consensus could be reached that a feature was both
widely accepted and believed to be a fairly good solution to a common
problem.  The goal is not to stifle experimentation, but to promote
good solutions that have been found through previous experience.  In
no sense do I believe anyone regards the current Common Lisp language
as the Final Word on Lisp.

Also, I have never interpreted Moses' diamond vs. mud analogy to have
anything to do with authoritarianism, only aesthetics.  Do others?

-- Joe Ginder

------------------------------

Date: 17 Oct 1983 07:38:44-PST
From: jmiller.ct@Rand-Relay
Subject: Reviewers needed for 1984 NCC

The Program Committee for the 1984 National Computer Conference, which will be
held in Las Vegas next July 9-12, is about to begin reviewing submitted
papers, and we are in need of qualified people who would be willing to serve
as reviewers.  The papers would be sent to you in the next couple of weeks;
the reviews would have to be returned by the end of December.

Since NCC is sponsored by non-profit computer societies and is run largely by
volunteers, it is not possible to compensate reviewers for the time and
effort they contribute.  However, to provide some acknowledgement of your
efforts, your name will appear in the conference proceedings and, if you
wish to attend NCC, we can provide you with advance registration forms for
hotels close to the convention center.  We are also trying to arrange
simplified conference registration for reviewers.

As the chair of the artificial intelligence track, I am primarily concerned
with finding people who would be willing to review papers on AI and/or
human-computer interaction.  However, I will forward names of volunteers in
other areas to the appropriate chairs.  If you would like to volunteer,
please send me your:

        - name,
        - mailing address,
        - telephone number,
        - arpanet or csnet address (if any), and
        - subjects that you are qualified to review (it would be ideal if
          you could use the ACM categorization scheme)

Either arpanet/csnet mail or US mail to my address below would be fine.
Thanks for your help.

James Miller
Computer * Thought Corporation
1721 West Plano Parkway
Plano, Texas 75075
JMILLER.CT @ RAND-RELAY

------------------------------

Date: Tue 11 Oct 83 10:44:08-CDT
From: Gordon Novak Jr. <CS.NOVAK@UTEXAS-20.ARPA>
Subject: $1K/mo Fellowships at Texas

The Department of Computer Sciences at the University of Texas at Austin
is  initiating a Doctoral Fellows program, with fellowships available in
Spring 1984 and thereafter.  Recipients must be admitted  to  the  Ph.D.
program;  November  1  is  the  applications  deadline  for Spring 1984.
Applicants must have a B.A. or B.S.  in Computer Science, or equivalent,
a total GRE (combined verbal and quantitative) of at least 1400,  and  a
GPA  of  at  least  3.5.   Doctoral  Fellows  will  serve  as  Teaching
Assistants for two semesters, then will be given a fellowship  (with  no
TA  duties)  for  one additional year.  The stipend will be $1000/month.
Twenty fellowships per year will be available.

The Computer Sciences Department at the University of Texas is ranked in
the top ten departments by the Jones-Lindzey report.  Austin is  blessed
with  an  excellent  climate  and  unexcelled  cultural and recreational
opportunities.

For details, contact Dr. Jim Bitner (CS.BITNER@UTEXAS-20),  phone  (512)
471-4353,  or  write to Computer Science Department, University of Texas
at Austin, Austin, TX 78712.

------------------------------

End of AIList Digest
********************

∂24-Oct-83  1255	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #81
Received: from SRI-AI by SU-AI with TCP/SMTP; 24 Oct 83  12:54:52 PDT
Date: Monday, October 24, 1983 8:58AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #81
To: AIList@SRI-AI


AIList Digest            Monday, 24 Oct 1983       Volume 1 : Issue 81

Today's Topics:
  Lisp Machines & Fuzzy Logic - Request,
  Rational Psychology,
  Reports - AI and Robotics Overviews & Report Sources,
  Bibliography - Parallelism and Consciousness,
  Learning - Machine Learning Course
----------------------------------------------------------------------

Date: Sun, 23 Oct 83 16:00:07 EDT
From: Ferd Brundick (LTTB) <fsbrn@brl-voc>
Subject: info on Lisp Machines

We are about to embark on an ambitious AI project in which we hope
to develop an Expert System.  The system will be written in Lisp
(or possibly Prolog) and will employ fuzzy logic and production
rules.  In my role as equipment procurer and novice Lisp programmer,
I would like any information regarding Lisp machines, eg, what is
available, how do the various machines compare, etc.  If this topic
has been discussed before I would appreciate pointers to the info.
On the software side, any discussions regarding fuzzy systems would
be welcomed.  Thanks.
                                        dsw, fferd
                                        <fsbrn@brl-voc>
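For readers unfamiliar with the fuzzy production rules mentioned above, such a
rule typically combines membership degrees with the common min/max (Zadeh)
operators.  The following Python sketch is purely illustrative; all rule names
and membership values are invented, not taken from the poster's project:

```python
# Hypothetical fuzzy production rule using min/max (Zadeh) operators.
# Membership degrees range over 0.0 .. 1.0; all names are illustrative.

def fuzzy_and(a, b):        # conjunction as minimum
    return min(a, b)

def fuzzy_or(a, b):         # disjunction as maximum
    return max(a, b)

# Working memory: degree to which each proposition holds.
facts = {"temperature_high": 0.8, "pressure_rising": 0.6}

# Rule: IF temperature_high AND pressure_rising THEN alarm.
# The conclusion inherits the truth degree of the antecedent.
alarm = fuzzy_and(facts["temperature_high"], facts["pressure_rising"])
print(alarm)   # 0.6
```

A production system built this way fires rules to a degree rather than
all-or-nothing, which is the usual appeal of fuzzy logic for expert systems.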

------------------------------

Date: 26 Sep 83 10:01:56-PDT (Mon)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Rational Psychology
Article-I.D.: drufl.670

Norm,

        Let me elaborate. Psychology, or logic of mind, involves BOTH
rational and emotional processes. To consider one exclusively defeats
the purpose of understanding.

        I have not read the article we are talking about so I cannot
comment on that article, but an example of what I consider a "Rational
Psychology" theory is "Personal Construct Theory" by Kelly. It is an
attractive theory but, in my opinion, it falls far short of describing
"logic of mind" as it fails to integrate emotional aspects.

        I consider learning-concept formation-creativity to have BOTH
rational and emotional attributes, hence it would be better if we
studied them as such.

        I may be creating a dichotomy where there is none. (Rational vs.
Emotional). I want to point you to an interesting book, "Metaphors We
Live By" (I forget the names of the authors), which in addition to
discussing many other AI-related concepts (without mentioning AI)
discusses the question of Objective vs. Subjective, which is similar
to what we are talking about here, Rational vs. Emotional.

        Thanks.

                                Samir Shah
                                AT&T Information Systems, Denver.
                                drufl!samir

------------------------------

Date: Fri 21 Oct 83 11:31:59-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Overview Reports

I previously mentioned a NASA report described in IEEE Spectrum.
I now have further information from NTIS.  The one mentioned
was the last of the following:

  An Overview of Artificial Intelligence and Robotics:
  Volume II - Robotics, NBSIR-82-2479, March 1982
      PB83-217547          Price $13.00

  An Overview of Expert Systems, NBSIR-82-2505, May 1982
  (Revised October 1982)
      PB83-217562          Price $10.00

  An Overview of Computer Vision, NBSIR-822582 (or possibly
  listed as NBSIR-832582), September 1982
      PB83-217554          Price $16.00

  An Overview of Computer-Based Natural Language Processing,
  NASA-TM-85635  NBSIR-832687  N83-24193   Price $10.00

  An Overview of Artificial Intelligence and Robotics;
  Volume I - Artificial Intelligence, June 1983
  NASA-TM-85836            Price $10.00


The ordering address is

  United States Department of Commerce
  National Technical Information Service
  5285 Port Royal Road
  Springfield, VA  22161


                                        -- Ken Laws

------------------------------

Date: Fri 21 Oct 83 11:38:42-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Report Sources

The NTIS literature I have also lists some other useful sources:

  University Microfilms, Inc.
  300 N. Zeeb Road
  Ann Arbor, MI  48106

  National Translation Center
  SLA Translation Center, The John Crerar Library
  35 West 33rd Street
  Chicago, IL  60616

  Library of Congress,
    Photoduplicating Service
  Washington, D.C.  20540

  American Institute of Aeronautics & Astronautics
  Technical Information Service
  555 West 57th Street, 12th Floor
  New York, NY  10019

  National Bureau of Standards
  Gaithersburg, MD  20234

  U.S. Dept. of Energy,
    Div. of Technical Information
  P.O. Box 62
  Oak Ridge, TN  37830

  NASA Scientific and Technical Facility
  P.O. Box 8757
  Balt/Wash International Airport
  Baltimore, MD  21240


                                        -- Ken Laws

------------------------------

Date: Sun, 23 Oct 83 12:21:54 PDT
From: Rik Verstraete <rik@UCLA-CS>
Subject: Bibliography (parallelism and consciousness)

David Rogers asked me if I could send him some of my ``favorite''
readings on the subject ``parallelism and consciousness.''  I searched
through my list, and came up with several references which I think
might be interesting to everybody.  Not all of them are directly
related to ``parallelism and consciousness,'' but nevertheless...

Albus, J.S., Brains, Behavior, & Robotics, Byte Publications Inc.
(1981).

Arbib, M.A., Brains, Machines and Mathematics, McGraw-Hill Book
Company, New York (1964).

Arbib, M.A., The Metaphorical Brain, An Introduction to Cybernetics as
Artificial Intelligence and Brain Theory, John Wiley & Sons, Inc.
(1972).

Arbib, M.A., "Automata Theory and Neural Models," Proceedings of the
1974 Conference on Biologically Motivated Automata Theory, pp. 13-18
(June 19-21, 1974).

Arbib, M.A., "A View of Brain Theory," in Selforganizing Systems, The
Emergence of Order, ed. F.E. Yates, Plenum Press, New York (1981).

Arbib, M.A., "Modelling Neural Mechanisms of Visuomotor Coordination
in Frog and Toad," in Competition and Cooperation in Neural Nets, ed.
Amari, S., and M.A. Arbib, Springer-Verlag, Berlin (1982).

Barto, A.G. and R.S. Sutton, "Landmark Learning: An Illustration of
Associative Search," Biological Cybernetics Vol. 42(1) pp. 1-8
(November 1981).

Barto, A.G., R.S. Sutton, and C.W. Anderson, "Neuron-Like Adaptive
Elements that can Solve Difficult Learning Control Problems," COINS
Technical Report 82-20, Computer and Information Science Department,
University of Massachusetts, Amherst, MA (1982).

Begley, S., J. Carey, and R. Sawhill, "How the Brain Works," Newsweek,
(February 7, 1983).

Davis, L.S. and A. Rosenfeld, "Cooperating Processes for Low-Level
Vision: A Survey," Artificial Intelligence Vol. 17 pp. 245-263
(1981).

Doyle, J., "The Foundations of Psychology," CMU-CS-82-149, Department
of Computer Science, Carnegie-Mellon University, Pittsburgh, PA
(February 18, 1982).

Feldman, J.A., "Memory and Change in Connection Networks," Technical
Report 96, Computer Science Department, University of Rochester,
Rochester, NY (December 1981).

Feldman, J.A., "Four Frames Suffice: A Provisionary Model of Vision and
Space," Technical Report 99, Computer Science Department, University of
Rochester, Rochester, NY (September 1982).

Grossberg, S., "Adaptive Resonance in Development, Perception and
Cognition," SIAM-AMS Proceedings Vol. 13 pp. 107-156 (1981).

Harth, E., "On the Spontaneous Emergence of Neuronal Schemata," pp.
286-294 in Competition and Cooperation in Neural Nets, ed. Amari, S.,
and M.A. Arbib, Springer-Verlag, Berlin (1982).

Hayes-Roth, B., "Implications of Human Pattern Processing for the
Design of Artificial Knowledge Systems," pp. 333-346 in
Pattern-Directed Inference Systems, ed. Waterman, D.A., and F. Hayes-
Roth, Academic Press, New York (1978).

Hofstadter, D.R., Godel, Escher, Bach: An Eternal Golden Braid, Vintage
Books, New York (1979).

Hofstadter, D.R. and D.C. Dennett, The Mind's I, Basic Books, Inc., New
York (1981).

Holland, J.H., Adaptation in Natural and Artificial Systems, The
University of Michigan Press, Ann Arbor (1975).

Holland, J.H. and J.S. Reitman, "Cognitive Systems Based on Adaptive
Algorithms," pp. 313-329 in Pattern-Directed Inference Systems, ed.
Waterman, D.A., and F. Hayes-Roth, Academic Press, New York (1978).

Kauffman, S., "Behaviour of Randomly Constructed Genetic Nets: Binary
Element Nets," pp. 18-37 in Towards a Theoretical Biology, Vol 3:
Drafts, ed. C.H. Waddington, Edinburgh University Press (1970).

Kauffman, S., "Behaviour of Randomly Constructed Genetic Nets:
Continuous Element Nets," pp. 38-46 in Towards a Theoretical Biology,
Vol 3: Drafts, ed. C.H. Waddington, Edinburgh University Press (1970).

Kent, E.W., The Brains of Men and Machines, Byte/McGraw-Hill,
Peterborough, NH (1981).

Klopf, A.H., The Hedonistic Neuron, Hemisphere Publishing Corporation,
Washington (1982).

Kohonen, T., "A Simple Paradigm for the Self-Organized Formation of
Structured Feature Maps," in Competition and Cooperation in Neural
Nets, ed.  Amari, S., and M.A. Arbib, Springer-Verlag, Berlin (1982).

Krueger, M.W., Artificial Reality, Addison-Wesley Publishing Company
(1983).

McCulloch, W.S. and W. Pitts, "A Logical Calculus of the Ideas Immanent
in Nervous Activity," Bulletin of Mathematical Biophysics Vol. 5(4)
pp.  115-133 (December 1943).

Michalski, R.S., J.G. Carbonell, and T.M.  Mitchell, Machine Learning,
An Artificial Intelligence Approach, Tioga Publishing Co, Palo Alto, CA
(1983).

Michie, D., "High-Road and Low-Road Programs," AI Magazine, pp. 21-22
(Winter 1981-1982).

Narendra, K.S. and M.A.L. Thathachar, "Learning Automata - A Survey,"
IEEE Transactions on Systems, Man, and Cybernetics Vol. SMC-4(4) pp.
323-334 (July 1974).

Nilsson, N.J., Learning Machines: Foundations of Trainable Pattern-
Classifying Systems, McGraw-Hill, New York (1965).

Palm, G., Neural Assemblies, Springer-Verlag (1982).

Pearl, J., "On the Discovery and Generation of Certain Heuristics," The
UCLA Computer Science Department Quarterly Vol. 10(2) pp. 121-132
(Spring 1982).

Pistorello, A., C. Romoli, and S. Crespi-Reghizzi, "Threshold Nets and
Cell-Assemblies," Information and Control Vol. 49(3) pp. 239-264 (June
1981).

Truxal, C., "Watching the Brain at Work," IEEE Spectrum Vol. 20(3) pp.
52-57 (March 1983).

Veelenturf, L.P.J., "An Automata-Theoretical Approach to Developing
Learning Neural Networks," Cybernetics and Systems Vol. 12(1-2) pp.
179-202 (January-June 1981).

------------------------------

Date: 20 October 1983 1331-EDT
From: Jaime Carbonell at CMU-CS-A
Subject: Machine Learning Course

                 [Reprinted from the CMU-AI bboard.]

[I pass this on as a list of topics and people in machine learning.  -- KIL]


The schedule for the remaining classes in the Machine Learning
course (WeH 4509, tu & thu at 10:30) is:

Oct 25 - "Strategy Acquisition" -- Pat Langley
Oct 27 - "Learning by Chunking & Macro Structures" -- Paul Rosenbloom
Nov 1  - "Learning in Automatic Programming" -- Elaine Kant
Nov 3  - "Language Acquisition I" -- John Anderson
Nov 8  - "Discovery from Empirical Observations" -- Herb Simon
Nov 10 - "Language Acquisition II" -- John Anderson or Brian McWhinney
Nov 15 - "Algorithm Discovery" -- Elaine Kant or Allen Newell
Nov 17 - "Learning from Advice and Instruction" -- Jaime Carbonell
Nov 22 - "Conceptual Clustering" -- Pat Langley
Nov 29 - "Learning to Learn" -- Pat Langley
Dec 1  - "Genetic Learning Methods" -- Stephen Smith
Dec 6  - "Why Perceptrons Failed" -- Geoff Hinton
Dec 8  - "Discovering Regularities in the Environment" -- Geoff Hinton
Dec 13 - "Trainable Stochastic Grammars" -- Peter Brown

------------------------------

End of AIList Digest
********************

∂26-Oct-83  1614	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #82
Received: from SRI-AI by SU-AI with TCP/SMTP; 26 Oct 83  16:11:25 PDT
Date: Wednesday, October 26, 1983 10:31AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #82
To: AIList@SRI-AI


AIList Digest           Wednesday, 26 Oct 1983     Volume 1 : Issue 82

Today's Topics:
  AI Hardware - Dolphin-Users Distribution List,
  AI Software - Inference Engine Toolkit for PCs,
  Metaphysics - Parallelism and Consciousness,
  Machine Learning - Readings,
  Seminars - CSLI & Speech Understanding & Term Rewriting & SYDPOL Languages
----------------------------------------------------------------------

Date: Tue 25 Oct 83 11:56:44-PDT
From: Christopher Schmidt <SCHMIDT@SUMEX-AIM.ARPA>
Subject: Dolphin-Users distribution list

        If there are AIList readers who would like to discuss lisp machines
at a more detailed level than the credo of AIList calls for, let me alert them
to the existence of the Dolphin-Users@SUMEX distribution list.  This list was
formed over a year ago to discuss problems with Xerox D machines, but it has
had very little traffic, and I'm sure few people would mind if other lisp
machines were discussed.  If you would like your name added, please send a note
to Dolphin-Requests@SUMEX.  If you would like to contribute or ask a question
about some lisp machine or problem, please do!  --Christopher

------------------------------

Date: Wed 26 Oct 83 10:26:47-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Inference Engine Toolkit for PCs

I have been requested to pass on some product availability data to AIList.
I think I can do so without violating Arpanet regulations.  I am
uncomfortable about such notices, however, and will generally require
that they pass through at least one "commercially disinterested" person
before being published in AIList.  I will perform this screening only
in exceptional cases.

The product is a document on a backward-chaining inference engine
toolkit, including source code in FORTH.  The inference engine uses
a production language syntax which allows semantic inference and
access to analytical subroutines written in FORTH.  Source code is
included for a forward-chaining tool, but the strategy is not
implemented in the inference routines.  The code is available on
disks formatted for a variety of personal computers.  For further
details, contact Jack Park, Helion, Inc., Box 445, Brownsville, CA
95919, (916) 675-2478.  The toolkit is also available from Mountain
View Press, Box 4656, Mountain View, CA  94040.

                                        -- Ken Laws
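For readers unfamiliar with the technique, backward chaining itself is easy to
sketch: to prove a goal, find a rule whose conclusion matches it and recursively
prove the rule's conditions.  The Python fragment below is a generic
illustration only; every rule and fact name is invented, and it bears no
relation to the FORTH toolkit's actual code:

```python
# Minimal backward-chaining sketch over IF-THEN rules.
# rules maps a goal to a list of alternative bodies (subgoal conjunctions);
# all rule and fact names here are illustrative.

rules = {
    "mortal": [["human"]],     # IF human THEN mortal
    "human":  [["greek"]],     # IF greek THEN human
}
facts = {"greek"}

def prove(goal):
    """Establish goal from facts by chaining backward through rules."""
    if goal in facts:
        return True
    for body in rules.get(goal, []):
        if all(prove(sub) for sub in body):
            return True
    return False

print(prove("mortal"))   # True
```

A forward-chaining tool, by contrast, would start from the facts and fire any
rule whose conditions are satisfied until no new conclusions appear.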

------------------------------

Date: Tuesday, 25 October 1983, 10:28-EST
From: John Batali <Batali at MIT-OZ>
Subject: Parallelism and Consciousness


I'm interested in the reasons for the pairing of these two ideas.  Does
anyone think that parallelism and consciousness necessarily have anything
to do with one another?

------------------------------

Date: Tue 25 Oct 83 12:22:45-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Parallelism and Consciousness

    I cannot say that "parallelism and consciousness are necessarily
related", for one can (at least) simulate a parallel process on a
sequential machine. However, just because one has the ability to
represent a process in a certain form does not guarantee that this
is the most natural form to represent it in; e.g., FORTRAN and LISP
are theoretically equally powerful, but who wants to program an expert
system in FORTRAN?

    Top-down programming of knowledge is not (in my opinion) an
easy candidate for parallelism; one can hope for large
speed-ups of execution speed, but rarely are the algorithms
able to naturally utilize the ability of parallel systems to
support interacting non-deterministic processes. (I'm sure
I'll hear from some parallel logic programmer on that one).

    My candidate for developing parallelism and consciousness involves
incorporating the non-determinism at the heart of the system, by
using a large number of subcognitive processes operating in
parallel; this is essentially Hofstadter's concept of consciousness
being an epiphenomenon of the interacting structures, and not being
explicitly programmed.

    The reason for the parallelism is twofold. First, I would
assume that a system of interacting subcognitive structures would
have a significant amount of "random" effort, while a more
condensed, logic-based system would be computationally more
efficient. Thus, the parallelism is partially used to offset the
added cost of the more fluid, random motion of the interacting
processes.

    Second, the interacting processes would allow a natural interplay
between events based on time; for example, infinite loops are
easily avoided through having a process interrupt if too much
time is taken. The blackboard architecture is also naturally
represented in parallel, as a number of coordinating processes
scribble on a shared data structure. Actually, in my mind, the
blackboard structure has not been developed fully; I have the
image of people at a party in my mind, with groups forming,
ideas developed, groups breaking up and reforming. Many blackboards
are active at once, and as interest is forgotten, they dissolve,
then reform around other topics.
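The blackboard idea described here can be sketched simply: independent
knowledge sources watch a shared structure and contribute whenever their
preconditions hold.  The Python toy below is illustrative only (all names are
invented), and it is sequential rather than truly parallel:

```python
# Toy blackboard: independent "knowledge sources" each inspect the
# shared board and post a contribution when their trigger appears.
# All names and data are illustrative.

board = {"input": "2 3"}

def parser(b):
    # Fires once the raw input is present but not yet parsed.
    if "input" in b and "numbers" not in b:
        b["numbers"] = [int(x) for x in b["input"].split()]

def adder(b):
    # Fires once parsed numbers are present but not yet summed.
    if "numbers" in b and "sum" not in b:
        b["sum"] = sum(b["numbers"])

sources = [adder, parser]     # order doesn't matter: each source
while "sum" not in board:     # fires only when its precondition holds
    for ks in sources:
        ks(board)

print(board["sum"])   # 5
```

Because each source is gated by what is already on the board, the same
structure works however control is scheduled, which is what makes the
architecture a natural fit for parallel or opportunistic execution.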

    Notice that this representation of a party has no simple
sequential representation, nor would a simple top level rule
base be able to model the range of activities the party can evolve to.
How does "the party" decide what beer to buy, or how long to stay intact,
or whether it will be fun or not? If I were to model a party, I'd
say a parallel system of subcognitive structures would be almost
the only natural way.

    As a final note, I find the vision of consciousness being
analogous to people at a party simple and humorous. And somehow,
I've always found God to clothe most truths in humor... am I the only
one who has laughed at the beautiful simplicity of E=MC↑2?

David

------------------------------

Date:     22 Oct 83 19:27:33 EDT  (Sat)
From: Paul Torek <flink%umcp-cs@CSNet-Relay>
Subject:  re: awareness

            [Submitted by Robert.Frederking@CMU-CS-CAD.]

[Robert:]

I think you've misunderstood my position.  I don't deny the existence of
awareness (which I called, following Michael Condict, consciousness).  It's
just that I don't see why you or anyone else wouldn't accept that the physical
object known as your brain is all that is necessary for your awareness.

I also think you have illegitimately assumed that all physicalists must be
functionalists.  A functionalist is someone who believes that the mind
consists in the information-processing features of the brain, and that it
doesn't matter what "hardware" is used, as long as the "software" is the
same there is the same awareness.  On the other hand, one can be a
physicalist and still think that the hardware matters too -- that awareness
depends on the actual chemical properties of the brain, and not just the
type of "program" the brain instantiates.

You say that a robot is not aware because its information-storage system
amounts to *just* the states of certain bits of silicon.  Functionalists
will object to your statement, I think, especially the word "just" (meaning
"merely").  I think the only reason one throws the word "just" into the
statement is because one already believes that the robot is unaware.  That
begs the question completely.

Suppose you have a "soul", which is a wispy ghostlike thing inside your body
but undetectable.  And this "soul" is made of "soul-stuff", let's call it.
Suppose we've decided that this "soul" is what explains your
intelligent-appearing and seemingly aware behavior.  But then someone comes
along and says, "Nonsense, Robert is no more aware than a rock is, since we,
by using a different level of abstraction in thinking about it, can see that
his data-structure is *merely* the states of certain soul-stuff inside him."
What makes that statement any less cogent than yours concerning the robot?

So, I don't think dualism can provide any advantages in explaining why
experiences have a certain "feel" to them.  And I don't see any problems
with the idea that the "feel" of an experience is caused by, or is identical
with, or is one aspect of, (I haven't decided which yet), certain brain
processes.
                                --Paul Torek, umcp-cs!flink

------------------------------

Date: Monday, 24 October 1983 15:31:13 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Re: awareness


        Sorry about not noticing the functionalist/physicalist
distinction.  Most of the people that I've discussed this with were either
functionalists or dualists.

        The physicalist position doesn't bother me nearly as much as the
functionalist one.  The question seems to be whether awareness is a function
of physical properties, or something that just happens to be associated with
human brains -- that is, whether it's a necessary property of the physical
structure of functioning brains.  For example, the idea that your "soul" is
"inside your body" is a little strange to me -- I tend to think of it as
being similar to the idea of hyperdimensional mathematics, so that a person's
"soul" might exist outside the dimensions we can sense, but communicate with
their body.  I think that physicalism is a reasonable hypothesis, but the
differences are not experimentally verifiable, and dualism seems more
reasonable to me.

        As far as the functionalist counter-argument to mine would go, the
way you phrased it implies that I think that the "soul" explains human
behavior.  Actually, I think that *all* human behavior can be modeled by
physical systems like robots.  I suspect that we'll find physical correlates
to all the information processing behavior we see.  The thing I am
describing is the internal experience.  A functionalist certainly could make
the counter-argument, but the thing that I believe to be important in this
discussion is exactly the question of whether the "soul" is intrinsically
part of the body, or whether it's made of "soul-stuff", not necessarily
"located" in the body (if "souls" have locations), but communicating with
it.  As I implied in my previous post, I am concerned with the eventual
legal and ethical implications of taking a functionalist point of view.

        So I guess I'm saying that I prefer either physicalism or dualism to
functionalism, due to the side-effects that will occur eventually, and that
to me dualism appears the most intuitively correct, although I don't think
anyone can prove any of the positions.

------------------------------

Date: 24 Oct 1983 13:58:10-EDT
From: Paul.Rosenbloom at CMU-CS-H
Subject: ML Readings

                 [Reprinted from the CMU-AI bboard.]

The suggested readings for this Thursday's meeting of the machine learning
course -- on chunking and macro-operators -- are: "Learning and executing
generalized robot plans" by Fikes, Hart, and Nilsson (AIJ 1972); "Knowledge
compilation: The general learning mechanism" by Anderson (proceedings of the
1983 machine learning workshop); and "The chunking of goal hierarchies: A
generalized model of practice" by Rosenbloom and Newell (also in the
proceedings of the 1983 machine learning workshop).  These readings are now
(or will be shortly) on reserve in the E&S library.

------------------------------

Date: Mon 24 Oct 83 20:09:30-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: CS Colloq 10/25 Terry Winograd & Brian Smith

[Reprinted from the SU-Score bboard.  Sorry this one is late,
but it still may be valuable as the first mention of CSLI on
AIList. -- KIL]


CS Colloquium, Tuesday, October 25, 4:15 Terman Auditorium
Terry Winograd (CSD) and Brian Smith (Xerox PARC)

Introducing the Center for the Study of Language and Information

This summer a new institute was created at Stanford, made up of
researchers from Stanford, SRI, Xerox, and Fairchild working in the study
of languages, both natural and formal.  Participants from Stanford will
include faculty, students and research staff from the departments of
Computer Science, Linguistics, and Philosophy.  We will briefly describe
the structure of the institute, and will present at some length the
intellectual vision on which it is based and the content of the current
research projects.

------------------------------

Date: 23 Oct 1983 22:14:30-EDT
From: Gary.Bradshaw at CMU-RI-ISL1
Subject: Dissertation defense

                 [Reprinted from the CMU-AI bboard.]

I am giving my dissertation defense on Monday, October 31 at 8:30 a.m.
in Baker Hall 336b.  Committee members: Herbert Simon (chair),
Raj Reddy, John Anderson, and Brian MacWhinney.  The following is the
talk abstract:


                     LEARNING TO UNDERSTAND SPEECH SOUNDS:
                              A THEORY AND MODEL

                               Gary L. Bradshaw

Current  theories  of  speech  perception  postulate  a  set  of innate
feature detectors that derive a phonemic analysis of speech, even though a
large number of empirical tests are inconsistent with the feature detector
hypothesis.    I will  briefly describe feature detector theory and the
evidence against it, and will then present an alternative learning theory of
speech  perception.    The talk  will  conclude  with  a  description  of a
computer implementation of the theory, along with learning and performance
data for the system.

------------------------------

Date: 25 Oct 1983 1510-PDT
From: GOGUEN at SRI-CSL
Subject: rewrite rule seminar

TENTATIVE PROGRAM FOR TERM REWRITING SEMINAR
--------------------------------------------

FIRST TALK:
27 October 1983, Thursday, 3:30-5pm, Jean-Pierre Jouannaud,
   Room EL381, SRI
This first talk will be an overview: basic mechanisms, solved & unsolved
problems, and main applications of term rewriting systems.

We will survey the literature, also indicating the most important results
and open problems, for the following topics:
  1. definition of rewriting
  2. termination
  3. For non-terminating rewritings: Church-Rosser properties, Sound computing
     strategies, Optimal computing strategies
  4. For terminating rewritings: Church-Rosser properties, completion
     algorithm, inductive completion algorithm, narrowing process
Three kinds of term rewriting will be discussed: Term Rewriting
Systems (TRS), Equational Term Rewriting Systems (ETRS) and Conditional Term
Rewriting Systems (CTRS).

--------------------------------------------------

Succeeding talks should be more technical.  The accompanying bibliographical
citations suggest important and readable references for each topic.  Do we
have any volunteers for presenting these topics?

---------------------------------------------------

Second talk, details of terminating TRS:
  Knuth and Bendix; Dershowitz TCS; Jouannaud; Lescanne & Reinig,
  Formalization of Programming Concepts, Garmisch; Huet JACM; Huet JCSS; Huet
  & Hullot JACM; Fay CADE 78; Hullot CADE 80; Goguen CADE 80.

Third and fourth talk, details of terminating ETRS:
 Jouannaud & Munoz draft; Huet JACM; Lankford & Ballantine draft; Peterson &
  Stickel JACM; Jouannaud & Kirchner POPL; Kirchner draft; Jouannaud, Kirchner
  & Kirchner ICALP.

Fifth talk, details of turning the Knuth-Bendix completion procedure into a
complete refutational procedure for first order built in theories, with
applications to PROLOG:
  Hsiang thesis; Hsiang & Dershowitz ICALP; Dershowitz draft "Computing
  with TRW".

Sixth and seventh talks, non-terminating TRS and CTRS:
  O'Donnell LNCS; Huet & Levy draft; Pletat, Engels and Ehrich draft; Bergstra
  & Klop draft.

Eighth talk, terminating CTRS:
  Remy thesis.

(More time may be needed for some talks.)

------------------------------

Date: 25 Oct 83  1407 PDT
From: Terry Winograd <TW@SU-AI>
Subject: next week's talkware - Nov 1 TUESDAY - K. Nygaard

                [Reprinted from the SU-SCORE bboard.]


Date: Tuesday, Nov 1 *** NOTE ONE-TIME CHANGE OF DATE AND TIME ***
Speaker: Kristen Nygaard (University of Oslo and Norwegian Computing Center)
Topic: SYDPOL: System Development and Profession-Oriented Languages
Time: 1:15-2:30
Place: Poly Sci Bldg. Room 268. ***NOTE NONSTANDARD PLACE***


A new project involving several universities and research centers in three
Scandinavian countries has been established to create new methods of system
development, using profession-oriented languages.  They will design
computer-based systems that will operate in work associated with
professions (the initial application is in hospitals), focussing on the
problem of facilitating cooperative work among professionals.  One aspect
of the research is the development of formal languages for describing the
domains of interest and providing an interlingua for the systems and for
the people who use them.  This talk will focus on the language-design
research, its goals and methods.

------------------------------

End of AIList Digest
********************

∂27-Oct-83  1859	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #83
Received: from SRI-AI by SU-AI with TCP/SMTP; 27 Oct 83  18:58:30 PDT
Date: Thursday, October 27, 1983 2:53PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #83
To: AIList@SRI-AI


AIList Digest            Friday, 28 Oct 1983       Volume 1 : Issue 83

Today's Topics:
  AI Jargon - Definitions,
  Unification - Request,
  Rational Psychology - Definition,
  Conferences - Computers and the Law & FORTH Proceedings,
  Seminars - AI at ADL & Theorem Proving
----------------------------------------------------------------------

Date: 26 October 1983 1048-PDT (Wednesday)
From: abbott at AEROSPACE (Russ Abbott)
Subject: Definitions of AI Terms

The IEEE is in the process of preparing a dictionary of computer terms.
Included will be AI-related terms.  Does anyone know of existing sets of
definitions?

In future messages I expect to circulate draft definitions for comment.

------------------------------

Date: 26 Oct 83 16:46:09 EDT  (Wed)
From: decvax!duke!unc!bts@Berkeley
Subject: Unification

Ken,
        I posted this to USENET a week ago.  Since it hasn't shown
up in the AIList, I suspect that it didn't make it to SRI [...].

[Correct, we must have a faulty connection. -- KIL]

        Bruce

P.S. As an astute USENET reader pointed out, I perhaps should have said
that a unifier makes the terms "syntactically equal".  I thought it
was clear from context.
=====================================================================

  From: unc!bts (Bruce Smith)
  Newsgroups: net.ai
  Title: Unification Query
  Article-I.D.: unc.6030
  Posted: Wed Oct 19 01:23:46 1983
  Received: Wed Oct 19 01:23:46 1983

       I'm interested in anything new on unification algo-
  rithms.  In case some readers don't know what I'm talking
  about, I'll give a short description of the problem and some
  references I know of.  Experts-- the ones I'm really
  interested in reaching-- may skip to the last paragraph.
       Given a set of terms (in some language) containing
  variables, the unification problem is to find a 'unifier',
  that is, a substitution for the variables in those terms
  which would make the terms equal.  Moreover, the unifier
  should be a 'most general unifier', that is, any other unif-
  iers should be extensions of it.  Resolution theorem-provers
  and logic programming languages like Prolog depend on
  unification-- though the Prolog implementations I'm familiar
  with "cheat". (See Clocksin and Mellish's "Programming in
  Prolog", p. 219.)
       Unification seems to be a very active topic.  The paper
  "A short survey on the state of the art in matching and
  unification problems", by Raulefs, Siekmann, Szabo and
  Unvericht, in the May 1979 issue of the SIGSAM Bulletin,
  contains a bibliography of over 90 articles.  And, "An effi-
  cient unification algorithm", by Martelli and Montanari, in
  the April 1982 ACM Transactions on Programming Languages and
  Systems, gives a (very readable) discussion of the effi-
  ciency of various unification algorithms.  A programming
  language has even been based on unification: "Uniform-- A
  language based on unification which unifies (much of) Lisp,
  Prolog and Act1" by Kahn in IJCAI-81.
       So, does anyone out there in network-land have a
  unification bibliography more recent than 1979?  If it's on-line,
  would you please post it to USENET's net.ai?  If not, where
  can we get a copy?

       Bruce Smith, UNC-Chapel Hill
       decvax!duke!unc!bts   (USENET)
       bts.unc@udel-relay (other NETworks)
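To make the query concrete, here is a minimal Python sketch of Robinson-style unification with an occurs check. The representation (variables as strings beginning with '?', compound terms as tuples, everything else a constant) and all names are illustrative choices of mine, not anything from Smith's post.

```python
# Robinson-style unification sketch.  Terms: a variable is a string
# starting with '?'; a compound term is a tuple (functor, arg1, ...);
# anything else is a constant.

def walk(term, subst):
    # Follow variable bindings until a non-variable or unbound variable.
    while isinstance(term, str) and term.startswith('?') and term in subst:
        term = subst[term]
    return term

def occurs(var, term, subst):
    # Occurs check: does var appear inside term under subst?
    term = walk(term, subst)
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, t, subst) for t in term[1:])
    return False

def unify(x, y, subst=None):
    # Return a most general unifier extending subst, or None on failure.
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if isinstance(x, str) and x.startswith('?'):
        if occurs(x, y, subst):
            return None          # the check many Prologs "cheat" by skipping
        return {**subst, x: y}
    if isinstance(y, str) and y.startswith('?'):
        return unify(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and \
       x[0] == y[0] and len(x) == len(y):
        for a, b in zip(x[1:], y[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None
```

For example, unifying f(?x, g(?y)) with f(a, g(?x)) binds both ?x and ?y to a; unifying ?x with g(?x) fails only because of the occurs check.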

------------------------------

Date: Wednesday, 26-Oct-83  18:42:21-GMT
From: RICHARD HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Rational Psychology

     If you were thinking of saying something about "Rational Psychology"
and haven't read the article, PLEASE restrain yourself.  It appeared in
Volume 4 Issue 3 (Autumn 83) of "The AI Magazine", and is pages 50-54 of
that issue.  It isn't hard to get AI Magazine.  AAAI members get it.  I'm
not a member, but DAI Edinburgh has a subscription and I read it in the
library.  I am almost tempted to join AAAI for the AI magazine alone, it
is good value.

     The "Rational" in Rational Psychology modifies Psychology the same
way Rational modifies Mechanics in Rational Mechanics or Thermodynamics
in Rational Thermodynamics.  It does NOT contrast with "the psychology
of emotion" but with Experimental Psychology or Human Psychology.  Here
is a paragraph from the paper in question:

"    The aim of rational psychology is understanding, just as in any
other branch of mathematics.  Where much of what is labelled "mathematical
psychology" consists of microscopic mathematical problems arising in the
non-mathematical prosecution of human psychology, or in the exposition of
informal theories with invented symbols substituting for equally precise
words, rational psychology seeks to understand the structure of
psychological concepts and theories by means of the most fit mathematical
concepts and strict proofs, by suspiciously analyzing the informally
developed notions to reveal their essence and structure, to allow debate
on their interpretation to be phrased precisely, with consequences of
choices seen mathematically.  The aim is not simply to further informal
psychology, but to understand it instead, not necessarily to solve
problems as stated, but to see if they are proper problems at all by
investigating their formulations. "

     There is nothing in this, or any other part of the paper, that would
exclude the study of emotions from Rational Psychology.  Indeed, unless or
until we encounter another intelligent race, Rational Psychology seems to
offer the only way of telling whether there are emotions that human beings
cannot experience.

     My only criticism of Doyle's programme (note spelling, I am not
talking about a computer program) is that I think we are as close to a
useful Rational Psychology as Galileo was to Rational Mechanics or Carnot
was to Rational Thermodynamics.  I hope other people disagree with me and
get cracking on it.  Any progress at all in this area would be useful.

------------------------------

Date: Thu, 27 Oct 83 07:50:56 pdt
From: ihnp4!utcsrgv!dave@Berkeley
Subject: Computers and the Law

Dalhousie University is sponsoring a computer conference under
CONFER on an MTS system at Wayne State University in Michigan.
The people in the conference include lawyers interested in computers
as well as computer science types interested in law.

Topics of discussion include computer applications to law, legal issues
such as patents, copyrights and trade secrets in the context of computers,
CAI in legal education, and AI in law.

For those who aren't familiar with Confer, it provides a medium which
is somewhat more structured than Usenet for discussions. People post
"items", and "discussion responses" are grouped chronologically (and
kept forever) under the item. All of the files are on one machine only.

The conference is just starting up. Dalhousie has obtained a grant to
fund everyone's participation, which means anyone who is interested
can join for free. Access is through Telenet or Datapac, and the
collect charges are picked up by the grant.

If anyone is interested in joining this conference (called Law:Forum),
please drop me a line.

        Dave Sherman
        The Law Society of Upper Canada
        Osgoode Hall
        Toronto, Ont.
        Canada  M5H 2N6
        (416) 947-3466

decvax!utzoo!utcsrgv!dave@BERKELEY  (ARPA)
{ihnp4,cornell,floyd,utzoo} !utcsrgv!dave  (UUCP)

------------------------------

Date: Thu 27 Oct 83 10:22:48-PDT
From: WYLAND@SRI-KL.ARPA
Subject: FORTH Convention Proceedings

I have been told that there will be no formal proceedings of the
FORTH convention, but that articles will appear in "FORTH
Dimensions", the magazine/journal of the FORTH Interest Group.
This journal publishes technical articles about FORTH methods and
techniques, algorithms, applications, and standards.  It is
available for $15.00/year from the following address:

        FORTH Interest Group
        P.O. Box 1105
        San Carlos, CA 94070
        415-962-8653

As you may know, Mountain View Press carries most of the
available literature for FORTH, including the proceedings of the
various technical conferences such as the FORTH Application
Conferences at the University of Rochester and the FORML
conferences.  I highly recommend them as a source of FORTH
literature.  Their address is:

        Mountain View Press, Inc.
        P.O. Box 4656
        Mountain View, CA 94040
        415-961-4103

I hope this helps.

Dave Wyland
WYLAND@SRI

------------------------------

Date: Wednesday, 26 October 1983 14:55 edt
From: TJMartin.ADL@MIT-MULTICS.ARPA (Thomas J. Martin)
Subject: Seminar Announcement

PLACE:    Arthur D. Little, Inc.
          Acorn Park (off Rte. 2 near Rte. 2/Rte. 16 rotary)
          Cambridge MA

DATE:     October 31, 1983

TIME:     8:45 AM, ADL Auditorium

TOPIC:    "Artificial Intelligence at ADL -- Activities, Progress, and Plans"

SPEAKER:  Dr. Karl M. Wiig, Director of ADL AI Program

ABSTRACT: ADL's AI program has been underway for four months.  A core group
          of staff has been recruited from several sections in the company
          and trained.  Symbolics 3600 and Xerox 1100 machines have been
          installed and are now operational.

          The seminar will discuss research in progress at ADL in:
          expert systems, natural language, and knowledge engineering tools.

------------------------------

Date: Wed 26 Oct 83 20:11:52-PDT
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: CS Colloq, Tues 11/1   Jussi Ketonen

                [Reprinted from the SU-SCORE bboard.]

CS Colloquium, Tuesday, November 1, 4:15pm Terman Auditorium
(refreshments at 3:45 at the 3rd floor lounge of MJH)

SPEAKER:  Dr. Jussi Ketonen, Stanford University CS Department

TITLE: A VIEW OF THEOREM-PROVING

        I'll be  discussing the  possibility of  developing  powerful
expert systems for mathematical reasoning - a domain characterized by
highly abbreviated  symbolic manipulations  whose logical  complexity
tends  to be rather  low. Of  particular interest will  be the proper
role  of meta theory, high-order  logic, logical decision procedures,
and  rewriting.  I  will   argue  for  a  different,  though  equally
important, role for the widely misunderstood notion of meta theory.
        Most of the discussion takes  place in the context of EKL, an
interactive theorem-proving system  under development at Stanford. It
has  been used to  prove facts about Lisp  programs and combinatorial
set theory.
        I'll  describe some of  the features of the  language of EKL,
the  underlying  rewriting   system,  and  the  algorithms  used  for
high-order unification with some examples.

------------------------------

End of AIList Digest
********************

∂28-Oct-83  1402	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #84
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Oct 83  14:00:25 PDT
Date: Friday, October 28, 1983 8:59AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #84
To: AIList@SRI-AI


AIList Digest            Friday, 28 Oct 1983       Volume 1 : Issue 84

Today's Topics:
  Metaphysics - Split Consciousness,
  Halting Problem - Discussion,
  Intelligence - Recursion & Parallelism & Consciousness
----------------------------------------------------------------------

Date: 24 Oct 83 20:45:29-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: consciousness and the teleporter - (nf)
Article-I.D.: uiucdcs.3417


See also the 17th and final essay by Daniel Dennett in his book Brainstorms
[Bradford Books, 1978].  The essay is called "Where Am I," and investigates
exactly this question of "split consciousness."

------------------------------

Date: Thu 27 Oct 83 23:04:47-MDT
From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
Subject: Semi-Summary of Halting Problem Discussion

Now that the discussion on the Halting Problem etc has died down,
I'd like to restate the original question, which seems to have been
misunderstood.

The question is this: consider a learning program, or any program
that is self-modifying in some way.  What must I do to prevent it
from getting caught in an infinite loop, or a stack overflow, or
other unpleasantnesses?  For an ordinary program, it's no problem
(heh-heh), the programmer just has to be careful, or prove his
program correct, or specify its operations axiomatically, or <insert
favorite software methodology here>.  But what about a program
that is changing as it runs?  How can *it* know when it's stuck
in a losing situation?

The best answers I saw were along the lines of an operating system
design, where a stuck process can be killed, or pushed to the bottom
of an agenda, or whatever.  Workable, but unsatisfactory.  In the case
of an infinite loop (that nastiest of possible errors), the program
can only guess that it has created a situation where infinite loops
can happen.

The most obvious alternative is to say that the program needs an "infinite
loop detector".  Ted Jardine of Boeing tells a story where, once upon
a time, some company actually tried to do this - write a program that
would detect infinite loops in any other program.  Of course, this is
ludicrous; it's a version of the Halting Problem.  For loops in a
program under a given length, yes; arbitrary programs, no.  So our
self-modifying program can manage only a partial solution, but that's
ok, because it only has to be able to analyze itself and its subprograms.

The question now becomes:  can a program of length n detect infinite
loops in any program of length <= n ?  I don't know; you can't just
have it simulate itself and watch for duplicated states showing up,
because the extra storage for the inbetween states would cause the
program to grow! and you have violated the initial conditions for the
question.  Some sort of static analysis could detect special cases
(like the Life blinkers mentioned by somebody), but I doubt that
all cases could be done this way.  Any theory types out there with
the answer?
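One partial answer to the storage objection, sketched here by me rather than taken from the discussion: for a deterministic program whose complete state can be captured and stepped, Floyd's cycle-finding algorithm detects a repeated state while storing only two states, so the detector need not grow with the trace. It assumes determinism and comparable states, and of course it cannot decide programs that never repeat a state; the function and its arguments are hypothetical.

```python
# Floyd's "tortoise and hare" loop detection in O(1) extra storage.
# step: state -> next state, or None if the program halts.

def loops_forever(step, start, max_steps=10**6):
    slow = fast = start
    for _ in range(max_steps):
        for _ in range(2):           # the hare takes two steps
            fast = step(fast)
            if fast is None:
                return False         # program halted: no infinite loop
        slow = step(slow)            # the tortoise takes one step
        if slow == fast:
            return True              # a state repeated: infinite loop
    return None                      # undecided within the step budget

# A two-state "blinker": 0 -> 1 -> 0 -> ...
print(loops_forever(lambda s: 1 - s, 0))
# A counter that halts at 5.
print(loops_forever(lambda s: s + 1 if s < 5 else None, 0))
```

This only detects cycles in the state sequence (like the Life blinkers mentioned); a program whose state grows without bound defeats it, which is where the Halting Problem reasserts itself.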

Anyway, I *don't* think these are vacuous problems;  I encountered them
when working on a learning capability for my parser, and "solved" them
by being very careful about rules that expanded the sentence, rather
than reducing (really just context-sensitive vs context-free).
Am facing it once again in my new project (a KR language derived from
RLL), and this time there's no way to sidestep!  Any new ideas would
be greatly appreciated.

                                                Stan Shebs

------------------------------

Date: Wed, 26 Oct 1983  16:30 EDT
From: BATALI%MIT-OZ@MIT-MC.ARPA
Subject: Transcendental Recursion


I've just joined this mailing list and I'm wondering about the recent
discussion of "consciousness."  While it's an interesting issue, I
wonder how much relevance it has for AI.  Thomas Nagel's article "What
is it like to be a bat?" argues that consciousness might never be the
proper subject of scientific inquiry because it is by its nature,
subjective (to the max, as it were) and science can deal with only
objective (or at least public) things.

Whatever the merits of this argument, it seems that a more profitable
object of our immediate quest might be intelligence.  Now it may be the
case that the two are the same thing -- or it may be that consciousness
is just "what it is like" to be an intelligent system.  On the other
hand, much of our "unconscious" or "subconscious" reasoning is very
intelligent.  Consider the number of moves that a chess master doesn't
even consider -- they are rejected even before being brought to
consciousness.  Yet the action of rejecting them is a very intelligent
thing to do.  Certainly someone who didn't reject those moves would have
to waste time considering them and would be a worse (less intelligent?)
chess player.  Conversely, it seems reasonable to suppose that one cannot
be conscious unless intelligent.

"Intelligent" like "strong" is a dispositional term, which is to say it
indicates what an agent thus described might do or tend to do or be able
to do in certain situations.  Whereas it is difficult to give a sharp
boundary between the intelligent and the non-intelligent, it is often
possible to say which of two possible actions would be the more
intelligent.

In most cases, it is possible to argue WHY the action is the more
intelligent.  The argument will typically mention the goals of the
agent, its abilities, and its knowledge about the world.  So it seems
that there is a fairly simple and common understanding of how the term
is applied:  An action is intelligent just in case it well satisfies
some goals of the agent, given what the agent knows about the world.  An
agent is intelligent just in case it performs actions that are
intelligent for it to perform.

A potential problem with this is that the proposed account requires that
the agent often be able to figure out some very difficult things on the
way to generating an intelligent action:  Which goal should I satisfy?
What is the case in the world?  Should I try to figure out a better
solution?  Each of these subproblems, constitutive of intelligence,
seems to require intelligence.

But there is a way out, and it might bring us back to the issue of
consciousness.  If the intelligent system is a program, there is no
problem with its applying itself recursively to its subproblems.  So the
subproblems can also be solved intelligently.  For this to work, though,
the program must understand itself and understand when and how to apply
itself to its subproblems.  So at least some introspective ability seems
like it would be important for intelligence, and the better the system
was at introspective activities, the more intelligent it would be.  The
recent theses of Doyle and Smith seem to indicate that a system could be
COMPLETELY introspective in the sense that all aspects of its operation
could be accessible and modifiable by the program itself.

But I don't know if it would be conscious or not.

------------------------------

Date: 26 Oct 1983 1537-PDT
From: Jay <JAY@USC-ECLC>
Subject: Re: Parallelism and Consciousness

Anything that  can  be  done  in parallel  can  be  done  sequentially.
Parallel  computations  can   be  faster,   and  can   be  easier   to
understand/write.  So if consciousness can  be programmed, and if it  is
as complex as it seems, then perhaps parallelism should be  exploited.
No algorithm is inherently parallel.

j'
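Jay's claim can be illustrated with a short Python sketch (entirely illustrative, not from the message): a synchronous parallel update of many cells, reproduced on a single sequential processor by double buffering, so that every cell appears to change at once.

```python
# Simulating a synchronous parallel update sequentially.  Each cell of
# a 1-D rule reads its neighbors' OLD values from one buffer and its
# new value goes into a fresh buffer, so the result is identical to all
# cells updating simultaneously -- just slower.

def parallel_step(cells):
    n = len(cells)
    # Building a fresh list is the double buffer; updating `cells` in
    # place would let early writes be read by later cells and change
    # the answer.
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

state = [0, 1, 0, 0]
state = parallel_step(state)   # every cell "fired" at once
```

The sequential version computes the same function as the parallel one; only the wall-clock time differs, which is the sense in which no algorithm is inherently parallel.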

------------------------------

Date: Thu 27 Oct 83 14:01:59-EDT
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness


     From: BUCKLEY@MIT-OZ
     Subject: Parallelism and Consciousness

     -- of what relevance is the issue of time-behavior of an algorithm to
     the phenomenon of intelligence, i.e., can there be in principle such a
     beast as a slow, super-intelligent program?

gracious, isn't this a bit chauvinistic?  suppose that ai is eventually
successful in creating machine intelligence, consciousness, etc. on
nano-second speed machines of the future:  we poor humans, operating
only at rates measured in seconds and above, will seem incredibly slow
to them.  will they engage in debate about the relevance of our time-
behavior to our intelligence?  if there cannot in principle be such a
thing as a slow, super-intelligent program, how can they avoid concluding
that we are not intelligent?
                                        -=*=- rick

------------------------------


Mail-From: DUGHOF created at 27-Oct-83 14:14:27
Date: Thu 27 Oct 83 14:14:27-EDT
From: DUGHOF@MIT-OZ
Subject: Re: Parallelism & Consciousness
To: RICKL@MIT-OZ
In-Reply-To: Message from "RICKL@MIT-OZ" of Thu 27 Oct 83 14:04:28-EDT

About slow intelligence -- there is one and only one reason to have
intelligence, and that is to survive.  That is where intelligence
came from, and that is what it is for.  It will do no good to have
a "slow, super-intelligent program", for that is a contradiction in
terms.  Intelligence has to be fast enough to keep up with the
world in real time.  If the superintelligent AI program is kept in
some sort of shielded place so that its real-time environment is
essentially benevolent, then it will develop a different kind of
intelligence from one that has to operate under higher pressures,
in a faster-changing world.  Everybody has had the experience of
wishing they'd made some clever retort to someone, but thinking of
it too late.  Well, if you always thought of those clever remarks
on the spot, you'd be smarter than you are.  If things that take
time (chess moves, writing good articles, developing good ideas)
took less time, then I'd be smarter.  Intelligence and the passage
of time are not unrelated.  You can't slow your processor down and
then claim that your program's intelligence is unaffected, even
though it's running the same program.  The world is marching ahead at
the same speed, and "pure, isolated intelligence" doesn't exist.

------------------------------

Date: Thu 27 Oct 83 14:57:18-EDT
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: Parallelism & Consciousness


    From: DUGHOF@MIT-OZ
    Subject: Re: Parallelism & Consciousness
    In-Reply-To: Message from "RICKL@MIT-OZ" of Thu 27 Oct 83 14:04:28-EDT

    About slow intelligence -- there is one and only one reason to have
    intelligence, and that is to survive....  It will do no good to have
    a "slow, super-intelligent program", for that is a contradiction in
    terms.  Intelligence has to be fast enough to keep up with the
    world in real time.

are you claiming that if we someday develop super-fast super-intelligent
machines, then we will no longer be intelligent?  this seems implicit in
your argument, and seems itself to be a contradiction in terms:  we *were*
intelligent until something faster came along, and then after that we
weren't.

or if this isn't strong enough for you -- you seem to want intel-
ligence to depend critically on survival -- imagine that the super-fast
super-intelligent computers have a robot interface, are malevolent,
and hunt us humans to extinction in virtue of their superior speed
& reflexes.  does the fact that we do not survive mean that we are not
intelligent?  or does it mean that we are intelligent now, but could
suddenly become un-intelligent without ourselves changing (in virtue
of the world around us changing)?

doubtless survival is important to the evolution of intelligence, & that
point is not really under debate.  however, to say that whether something is
or is not intelligent is a property dependent on the relative speed of the
creatures sharing your world seems to make us un-intelligent as machines
and programs get better, and amoebas intelligent as long as they were
the fastest survivable thing around.

                        -=*=- rick

------------------------------

Date: Thu, 27 Oct 1983  15:26 EDT
From: STRAZ%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness


    Hofstadter:
    About slow intelligence -- there is one and only one [...]

  Lathrop:
  doubtless survival is important to the evolution of intelligence, &
  that point is not really under debate.

Me:
No, survival is not the point. It is for the organic forms that
evolved with little help from outside intelligences, but a computer
that exhibits a "slow, super-intelligence" in the protective
custody of humans can solve problems that humans might never
be able to solve (due to short attention span, lack of short-term
memory, tedium, etc.)

For example, a problem like where to best put another bridge/tunnel
in Boston is a painfully difficult thing to think about, but if
a computer comes up with a good answer (with explanatory justifications)
after thinking for a month, it would have fulfilled anyone's
definition of slow, superior intelligence.

------------------------------

Date: Thu, 27 Oct 1983  23:35 EDT
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness


That's what you get for trying to define things too much.

------------------------------

End of AIList Digest
********************

∂31-Oct-83  1445	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #85
Received: from SRI-AI by SU-AI with TCP/SMTP; 31 Oct 83  14:44:29 PST
Date: Monday, October 31, 1983 9:18AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #85
To: AIList@SRI-AI


AIList Digest            Monday, 31 Oct 1983       Volume 1 : Issue 85

Today's Topics:
  Intelligence
----------------------------------------------------------------------

Date: Fri 28 Oct 83 13:43:21-EDT
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re: Parallelism & Consciousness

    From: MINSKY@MIT-OZ

    That's what you get for trying to define things too much.

what do i get for trying to define what too much??

though obviously, even asking that question is trying to define
your intent too much, & i'll only get more of whatever i got for
whatever it was i got it for.

                        -=*=-

------------------------------

Date: 28 Oct 1983 12:02-PDT
From: ISAACSON@USC-ISI
Subject: Re: Parallelism & Consciousness


  From Minsky:
  That's what you get for trying to define things too much.

Coming, as it does, out of the blue, your comment appears to
negate the merits of this discussion.  The net effect might
simply be to bring it to a halt.  I think that it is, inadvertent
though it might be, unkind to the discussants, and unfair to the
rest of us who are listening in.

I agree.  The level of confusion is not insignificant and
immediate insights are not around the corner.  However, in my
opinion, we do need serious discussion of these issues.  I.e.,
questions of subcognition vs.  cognition; parallelism,
"autonomy", and epiphenomena; algorithmic programability vs.
autonomy at the subcognitive and cognitive levels; etc.  etc.

Perhaps it would be helpful if you give us your views on some of
these issues, including your views on a good methodology to
discussing them.

-- JDI

------------------------------

Date: 30 Oct 83 13:27:11 EST  (Sun)
From: Don Perlis <perlis%umcp-cs@CSNet-Relay>
Subject: Re:  Parallelism & Consciousness


             From: BUCKLEY@MIT-OZ
             --  of  what relevance is the issue of time-behavior of an
             algorithm to the phenomenon  of  intelligence,  i.e.,  can
             there  be  in  principle  such  a  beast  as    a    slow,
             super-intelligent program?

        From: RICKL%MIT-OZ@mit-mc
        gracious,  isn't  this  a bit chauvinistic?  suppose that ai is
        eventually    successful   in  creating  machine  intelligence,
        consciousness, etc.   on  nano-second  speed  machines  of  the
        future:    we  poor humans, operating only at rates measured in
        seconds and above, will seem incredibly slow  to  them.    will
        they engage in debate about the relevance of our time-behavior
        to our intelligence?  if there cannot in principle  be  such  a
        thing  as a slow, super-intelligent program, how can they avoid
        concluding that we are not intelligent?  -=*=- rick

It seems to me that the issue isn't the 'appearance' of intelligence of
one being to another--after all, a very slow  thinker  may  nonetheless
think  very  effectively and solve a problem the rest of us get nowhere
with.  Rather I suggest that intelligence be regarded as effectiveness,
namely,  as coping with the environment.  Then real-time issues clearly
are significant.

A  supposedly brilliant algorithm that 'in principle' could decide what
to do about an impending disaster,  but  which  is  destroyed  by  that
disaster  long  before  it manages to grasp that there is a disaster, or
what its dimensions are, perhaps should not be called  intelligent  (at
least on the basis of *that* event).  And if all its potential behavior
is of this sort, so that it never really gets anything settled, then it
could  be  looked  at  as really out of touch with any grasp of things,
hence not intelligent.

Now  this  can be looked at in numerous contexts; if for instance it is
applied to the internal ruminations of the agent, eg  as  it  tries  to
settle  Fermat's  Last  Theorem, and if it still can't keep up with its
own physiology, ie,  its  ideas  form  and  pass  by  faster  than  its
'reasoning  mechanisms' can keep track of, then there too it will fail,
and I doubt we would want to say it 'really' was bright.  It can't even
be  said  to be trying to settle Fermat's Last theorem, for it will not
be able to keep that in mind.

This  is in a sense an internal issue, not one of relative speed to the
environment.  But considering that the internal and external events are
all  part  of  the  same  physical  world,  I  don't  see a significant
difference.  If the agent *can* keep track of  its  own  thinking,  and
thereby  stick  to the task, and eventually settle the theorem, I think
we would call it bright indeed,  at  least  in  that  domain,  although
perhaps  a moron in other matters (not even able to formulate questions
about them).

------------------------------

Date: Sun 30 Oct 83 16:59:12-EST
From: RICKL%MIT-OZ@MIT-MC.ARPA
Subject: Re:  Parallelism & Consciousness

    [...]

    From: Don Perlis <perlis%umcp-cs@CSNet-Relay>
    It seems to me that the issue isn't the 'appearance' of intelligence of
    one being to another....Rather I suggest that intelligence be regarded
    as effectiveness, namely,  as coping with the environment....

From this & other recent traffic on the net, the question we are really
discussing seems to be:  ``can an entity be said to be intelligent in and
of itself, or can an entity only be said to be intelligent relative to some
world?''.  I don't think I believe in "pure, abstract intelligence, divorced
from the world".  However, a consequence of the second position seems to
be that there should be possible worlds in which we would consider humans
to be un-intelligent, and I can't readily think of any (can anyone else?).

Leaving that question as too hard (at least for now), another question we
have been chasing around is:  ``can intelligence be regarded as survivability,
(or more generally as coping with an external environment)?''.  In the strong
form this position equates the two, and this position seems to be too
strong.  Amoebas cope quite well and have survived for unimaginably longer
than we humans, but are generally acknowledged to be un-intelligent (if
anyone cares to dispute this, please do).  Survivability and coping with
the environment, alone, therefore fail to adequately capture our intuitions
of intelligence.
                        -=*=- rick

------------------------------

Date: 30 Oct 1983 18:46:48 EST (Sunday)
From: Dave Mankins <dm@BBN-UNIX>
Subject: Re: Intelligence and Competition

By the survivability/adaptability criteria the cockroach must be
one of the most intelligent species on earth.  There's obviously
something wrong with those criteria.

------------------------------

Date: Fri 28 Oct 83 14:19:36-PDT
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Definition of Intelligence

I like the idea that the intelligence of an organism should be
measured relative to its goals (which usually include survival, but
not in the case of "smart" bombs and kamikaze pilots).  I don't think
that goal-satisfaction criteria can be used to establish the "relative
intelligence" of organisms with very different goals.  Can a fruit fly
be more intelligent than I am, no matter how well it satisfies its
goals?  Can a rock be intelligent if its goals are sufficiently
limited?

To illustrate this in another domain, let us consider "strength".  A
large bulldozer is stronger than a small one because it can apply more
brute force to any job that a bulldozer is expected to do.  Can we
say, though, that a bulldozer is "stronger" than a pile driver, or
vice versa?

Put another way: If scissors > paper > rock > scissors ..., does it
make any sense to ask which is "best"?  I think that this is the
problem we run into when we try to define intelligence in terms of
goals.  This is not to say that we can define it to be independent of
goals, but goal satisfaction is not sufficient.

Instead, I would define intelligence in terms of adaptability or
learning capability in the pursuit of goals.  An organism with hard-
wired responses to its environment (e.g. a rock, a fruit fly, MACSYMA)
is not intelligent because it does not adapt.  I, on the other hand,
can be considered intelligent even if I do not achieve my goals as
long as I adapt to my environment and learn from it in ways that would
normally enhance my chances of success.

Whether speed of response must be included as a measure of
intelligence depends on the goal, but I would say that, in general,
rapid adaptation does indicate greater intelligence than the same
response produced slowly.  Multiple choice aptitude tests, however,
exercise such limited mental capabilities that a score of correct
answers per minute is more a test of current knowledge than of ability
to learn and adapt within the testing period.  Knowledge relative to
age (IQ) is a useful measure of learning ability and thus of
intelligence, but cannot be used for comparing different species.  I
prefer unlimited-time "power" tests for measuring both competence and
intelligence.

The Turing test imposes a single goal on two organisms, namely the
goal of convincing an observer at the other end of a tty that he/it is
the true human.  This will clearly only work for organisms capable
of typing at human speed and capable of accepting such a goal.  These
conditions imply that the organism must have a knowledge of human
psychology and capabilities, or at least a belief (probably incorrect)
that it can "fake" them.  Given such a restricted situation, the
nonhuman organism is to be judged intelligent if it can appropriately
modify its own behavior in response to questioning at least as well as
the human can.  (I would claim that a nonadapting organism hasn't a
chance of passing the test, and that this is just what the observer
will be looking for.)

I do not believe that a single test can be devised which can determine
the relative intelligences of arbitrary organisms, but the public
wants such a test.  What shall we give them?  I would suggest the
following procedure:

For two candidate organisms, determine a goal that both are capable
of accepting and that we consider related to intelligence.  For an
interesting test, the goal must be such that neither organism is
specially adapted or maladapted for achieving it.  The goal might be
absolute (e.g., learn 100 nonsense syllables) or relative (e.g.,
double your vocabulary).  If no such goal can be found, the
organisms cannot be ranked.  If a goal is found, we can rank them
along the dimension of the indicated behavior and we can infer a
similar ranking for related behaviors (e.g., verbal ability).  The
actual testing for learning ability is relatively simple.

How can we test a computer for intelligence?  Unfortunately, a computer
can be given a wide variety of sensors and effectors and can be made
to accept almost any goal.  We must test it for human-level adaptability
in using all of these.  If it cannot equal human ability on nearly all
measurable scales (e.g., game playing, verbal ability, numerical
ability, learning new perceptual and motor skills, etc.), it cannot
be considered intelligent in the human sense.  I know that this is
exceedingly strict, but it is the same test that I would apply to
decide whether a child, idiot savant, or other person were intelligent.
On the other hand, if I could not match the computer's numerical and
memory capabilities, it would have the right to judge me unintelligent by
computer standards.

The intelligence of a particular computer program, however, should
be judged by much less stringent standards.  I do not expect a
symbolic algebra program to learn to whistle Dixie.  If it can
learn, without being programmed, a new form of integral faster
than I can, or if it can find a better solution than I can in
any length of time, then I will consider it an intelligent symbolic
algebra program.  Similar criteria apply to any other AI program.

I have left open the question of how to measure adaptability,
relative importance of differing goals, parallel satisfaction of
multiple goals, etc.  I have also not discussed creativity, which
involves autonomous creation of new goals.  Have I missed anything,
though, in the basic concept of intelligence?

                                        -- Ken Laws

------------------------------

Date: 30 Oct 1983 1456-PST
From: Jay <JAY@USC-ECLC>
Subject: Re:  Parallelism & Consciousness

    From: RICKL%MIT-OZ@MIT-MC.ARPA

                ...
    the question we are really discussing seems to be: ``can an entity be
    said to be intelligent in and of itself, or can an entity only be said
    to be intelligent relative to some world?''.  I don't think I believe
    in "pure, abstract intelligence, divorced from the world".
                ...
    another question we have been chasing around is: ``can intelligence be
    regarded as survivability, (or more generally as coping with an
    external environment)?''.  [...]

  I believe intelligence to be the ability to cope with CHANGES in the
environment.  Take desert tortoises: although they are quite young
compared to amoebas, they have been living in the desert some
thousands, if not millions, of years.  Does this mean they are
intelligent?  NO!  Put a freeway through their desert and the tortoises
are soon dying.  Increase the rainfall and they may become unable to
compete with the rabbits (which will take full advantage of the
increase in vegetation and produce an increase in rabbit-ation).  The
ability to cope with a CHANGE in the environment marks intelligence.
All a tortoise need do is not cross a freeway, or kill baby rabbits,
and then they could begin to claim intelligence.  A similar argument
could be made against intelligent amoebas.

  A possible problem with this view is that biospheres can be counted
intelligent: in the desert an increase in rainfall is handled by an
increase in vegetation, and then in herbivores (rabbits), and then an
increase in carnivores (coyotes).  The end result is not the end of a
biosphere, but the change of a biosphere.  The biosphere has
successfully coped with a change in its environment.  Even more
ludicrous, an argument could be made for an intelligent planet, or
solar system, or even galaxy.

  Notice, an  organism  that  does  not  change  when  its  environment
changes,  perhaps  because  it  does  not  need  to,  has  not   shown
intelligence.  This is,  of course,  not to say  that that  particular
organism is  un-intelligent.   Were  the world  to  become  unable  to
produce rainbows, people would change little, if at all.

My behavioralism is showing,
j'

------------------------------

Date: Sun, 30 Oct 1983  18:11 EST
From: JBA%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness

    From: RICKL%MIT-OZ at MIT-MC.ARPA
    However, a consequence of the second position seems to
    be that there should be possible worlds in which we would consider humans
    to be un-intelligent, and I can't readily think of any (can anyone else?).

Read the Heinlein novel entitled (I think) "Have Spacesuit, Will
Travel."  Somewhere in there a race tries to get permission to
kill humans  wantonly, arguing that they're basically stupid.  Of
course, a couple of adolescent humans who happen to be in the neighborhood
save the day by proving that they're smart.  (I read this thing a long
time ago, so I may have the story and/or title a little wrong.)

        Jonathan

[Another story involves huge alien "energy beings" taking over the earth.
They destroy all human power sources, but allow the humans to live as
"cockroaches" in their energy cities.  One human manages to convince an
alien that he is intelligent, so the aliens immediately begin a purge.
Who wants intelligent cockroaches?  -- KIL]

------------------------------

Date: Sun 30 Oct 83 15:41:18-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Intelligence and Competition

           From: RICKL%MIT-OZ@MIT-MC.ARPA
        I don't think I believe in "pure, abstract intelligence, divorced
    from the world".  However, a consequence of the second position seems to
    be that there should be possible worlds in which we would consider humans
    to be un-intelligent, and I can't readily think of any (can anyone else?).

           From: Jay <JAY@USC-ECLC>
           ...Take  desert tortoises,  [...]

Combining these two comments, I came up with this:

            ...Take American indians, although they are quite young compared
      to amoeba, they have been living in the desert some thousands of years.
      Does this mean they are intelligent? NO! Put a freeway (or some barbed
      wire) through their desert and they are soon dying. Increase cultural
      competition and they may be unable to compete with the white man (which
      will take full advantage of their lack of guns and produce an
      increase in white-ation). The ability to cope with CHANGE in the
      environment marks intelligence.

I think that the stress on "adaptability" makes for some rather strange
candidates for intelligence.  The Indians were developing a cooperative
relationship with their environment, rather than a competitive one; I cannot
help but think that our cultural stress on competition has biased us
towards competitive definitions of intelligence.

    Survivability has many facets, and competition is only one of them;
it may not even be a very large one.  Perhaps before judging intelligence
by how systems cope with change, we should ask how systems cope with
stasis.  While it is popular to think about how the great thinkers
of the past arose out of great trials, I think that more of modern knowledge
came from times of relative calm, when there was enough surplus to offer
a group of thinkers time to ponder.

David

------------------------------

End of AIList Digest
********************

∂31-Oct-83  1951	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #86
Received: from SRI-AI by SU-AI with TCP/SMTP; 31 Oct 83  19:50:24 PST
Date: Monday, October 31, 1983 9:53AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #86
To: AIList@SRI-AI


AIList Digest            Monday, 31 Oct 1983       Volume 1 : Issue 86

Today's Topics:
  Complexity Measures - Request,
  Obituary - Alfred Tarski,
  Seminars - Request for Synopses,
  Discourse Analysis - Representation,
  Review - JSL Review of GEB,
  Games - ACM Chess Results,
  Software Verification - VERUS System Offered,
  Conferences - FGCS Call for Papers
----------------------------------------------------------------------

Date: 24 October 1983 17:02 EDT
From: Karl A. Nyberg <KARL @ MIT-MC>
Subject: writing analysis

I am interested in programs that people might know of that give word
distributions, sentence lengths, etc., so as to gauge the complexity of
articles.  I'd also like to know if anyone could point me to any models
that specify that complexity in terms of these sorts of measurements.
Let me know if any programs you might know of are particular to any text
formatter, programming language, or operating system.  Thanks.

-- Karl --

[Such capabilities are included in recent versions of the Unix
operating system. -- KIL]
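As a rough illustration of the kind of measurements in question (word
distributions and sentence lengths), the raw statistics can be gathered in a
few lines; the function name and the measures chosen here are illustrative
assumptions, not any existing analysis program:

```python
import re
from collections import Counter

def complexity_stats(text):
    """Surface measures of text complexity: counts, average word
    and sentence lengths, and the most frequent words."""
    # Split on sentence-ending punctuation; drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "avg_sentence_length": len(words) / len(sentences),
        "most_common": Counter(words).most_common(5),
    }

stats = complexity_stats("This is a short text. It has two sentences.")
print(stats["avg_sentence_length"])  # 4.5
```

A complexity model in the sense Karl asks about would then combine such
measurements into a score; this sketch only collects them.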

------------------------------

Date: Sun 30 Oct 83 16:46:39-CST
From: Lauri Karttunen <Cgs.Lauri@UTEXAS-20.ARPA>
Subject: Alfred Tarski

                [Reprinted from the UTexas-20 bboard.]

Alfred Tarski, the father of model-theoretic semantics, died last
Wednesday at the age of 82.

------------------------------

Date: Fri, 28 Oct 83 21:29:41 pdt
From: sokolov%Coral.CC@Berkeley
Subject: Re: talk announcements in net.ai

Ken, I would like to submit this message as a suggestion to the
AIlist readership:

This message concerns the rash of announcements of talks being given
around the country (probably the world, if we include Edinburgh).  I am
one of those people who like to know what is going on elsewhere, so I
welcome the announcements.  Unfortunately, my appetite is only whetted
by them.  Therefore, I would like to suggest that, WHENEVER possible,
summaries of these talks should be submitted to the net.  I realize
that this isn't always practical; nevertheless, I would like to
encourage people to submit these talk reviews.

                                Jeff Sokolov
                                Program in Cognitive Science
                                  and Department of Psychology
                                UC Berkeley
                                sokolov%coral@berkeley
                                ...!ucbvax!ucbcoral:sokolov

------------------------------

Date: 29 Oct 83  1856 PDT
From: David Lowe <DLO@SU-AI>
Subject: Representation of reasoning

I have recently written a paper that might be of considerable interest
to the people on this list.  It is about a new form of structuring
interactions between many users of an interactive network, based on an
explicit representation of debate.  Although this is not a typical AI
problem, it is related to much AI work on the representation of language
or reasoning (for example, the representation of a chain of reasoning in
expert systems).  The representation I have chosen is based on the work
of the philosopher Stephen Toulmin.  I am also sending a version of this
message to HUMAN-NETS, since one goal of the system is to create a
lasting, easily-accessed representation of the interactions which occur
on discussion lists such as HUMAN-NETS or AIList.

A copy of the paper can be accessed by FTP from SAIL (no login required).
The name of the file is PAPER[1,DLO].  You can also send me a message
(DLO @ SAIL) and I'll mail you a copy.  If you send me your U.S. mail
address, I'll physically mail you a carefully typeset version.  Let
me know if you are interested, and I'll keep you posted about future
developments.  The following is an abstract:

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

             THE REPRESENTATION OF DEBATE AS A BASIS
              FOR INFORMATION STORAGE AND RETRIEVAL

                          By David Lowe
                   Computer Science Department
             Stanford University, Stanford, CA 94305

                             Abstract

Interactive computer networks offer the potential for creating a body
of information on any given topic which combines the best available
contributions from a large number of users.  This paper describes a
system for cooperatively structuring and evaluating information through
well-specified interactions by many users with a common database.  A
working version of the system has been implemented and examples of its use
are presented.  At the heart of the system is a structured representation
for debate, in which conclusions are explicitly justified or negated by
individual items of evidence.  Through debates on the accuracy of
information and on aspects of the structures themselves, a large number of
users can cooperatively rank all available items of information in terms
of significance and relevance to each topic.  Individual users can then
choose the depth to which they wish to examine these structures for the
purposes at hand.  The function of this debate is not to arrive at
specific conclusions, but rather to collect and order the best available
evidence on each topic.  By representing the basic structure of each field
of knowledge, the system would function at one level as an information
retrieval system in which documents are indexed, evaluated and ranked in
the context of each topic of inquiry.  At a deeper level, the system would
encode knowledge in the structure of the debates themselves.  This use
of an interactive system for structuring information offers many further
opportunities for improving the accuracy, accessibility, currency,
conciseness, and clarity of information.

------------------------------

Date: 28 Oct 83 19:06:50 EDT  (Fri)
From: Bruce T. Smith <bts%unc@CSNet-Relay>
Subject: JSL review of GEB

     The most recent issue (Vol. 48, Number 3, September
1983) of the Journal of Symbolic Logic (JSL) has an
interesting review of Hofstadter's book "Godel, Escher,
Bach: an eternal golden braid."  (It's on pages 864-871, a
rather long review for the JSL.  It's by Judson C. Webb, a
name unfamiliar to me, amateur that I am.)
     This is a pretty favorable review-- I know better than
to start any debates over GEB-- but what I found most
interesting was its emphasis on the LOGIC in the book.  Yes,
I know that's not all GEB was about, but it was unusual
to read a discussion of it from this point of view.  Just to
let you know what to expect, Webb's major criticism is
Hofstadter's failure, in a book on self-reference, to dis-
cuss Kleene's fixed-point theorem,

     which fuses these two phenomena so closely together.
     The fixed-point theorem shows (by an adaptation of
     Godel's formal diagonalization) that the strangest ima-
     ginable conditions on functions have solutions computed
     by self-referential machines making essential use of
     their own Godel-numbers, provided only that the condi-
     tions are expressible by partial recursive functions.
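Webb is referring to what is usually called Kleene's recursion theorem;
stated in modern notation (this gloss is mine, not Webb's):

```latex
% Kleene's recursion (fixed-point) theorem: for every total
% computable transformation f of program indices there is an
% index e whose program behaves exactly like the transformed one:
\forall f \;\text{total computable}\;\; \exists e \;\;
  \varphi_e = \varphi_{f(e)}
% That is, a program e can in effect make use of its own
% Godel number -- the self-reference Webb says GEB should
% have discussed.
```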

He also points out that Hofstadter didn't show quite how
shocking Godel's theorems were: "In short, Godel discovered
the experimental completeness of a system that seemed almost
too weak to bother with, and the theoretical incompleteness
of one that aimed only at experimental completeness."
     Enough.  I'm not going to type the whole 7.5 pages.  Go
look for the newest issue of the JSL-- probably in your
Mathematics library.  For any students out there, membership
in the Association for Symbolic Logic is only $9.00/yr and
includes the JSL.  Last year they published around 1000
pages.  It's mostly short technical papers, but they claim
they're going to do more expository stuff.  The address to
write to is

          The Association for Symbolic Logic
          P.O. Box 6248
          Providence, RI  02940

============================================
Bruce Smith, UNC-Chapel Hill
...!decvax!duke!unc!bts     (USENET)
bts.unc@CSnet-Relay (from other NETworks)

------------------------------

Date: 27 October 1983 1130-EDT
From: Hans Berliner at CMU-CS-A
Subject: ACM chess results

                  [Reprinted from the CMU-CS bboard.]

The results of the ACM World Computer Chess Championship are:
CRAY BLITZ - 4 1/2      1st place
BEBE - 4                2nd
AWIT - 4                3rd
NUCHESS - 3 1/2         4th
CHAOS   - 3 1/2         5th
BELLE - 3               6th

There were lots of others with 3 points.  Patsoc finished with a
score of 1.5 - 3.5.  It did not play any micros and was usually
outgunned by 10-MIP mainframes.  There was a lot of excitement in the
last 3 rounds.  In round 3 NUCHESS defeated Belle (the first time
Belle had lost to a machine).  In round 4 Nuchess drew Cray Blitz in
a long struggle when they were both tied for the lead and remained so
at 3 1/2 points after this round.  The final round was really wild:
BEBE upset NUCHESS (the first time it had ever beaten Nuchess) just
when NUCHESS looked to have a lock on the tournament.  CRAY Blitz won
from Belle when the latter rejected a draw because it had been set to
play for a win at all costs (Belle's only chance, but this setting
was a mistake as CRAY BLITZ also had to win at all costs).  In the
end AWIT snuck into 3rd place in all this commotion, without having
ever played any of the contenders.  One problem with the Swiss pairing
system used for tournaments where only a few rounds are possible is
that it only brings out a winner.  The other scores are very much
dependent on what happens in the last round.

Belle was using a new modification in search technique which, based on
the results, could be thought of as a mistake.  Probably it is not,
though possibly the implementation was not the best.  In any case
Thompson apparently thought he had to do something to improve
Belle for the tournament.

In any case, it was not a lost cause for Thompson.  He shared this
year's Turing Award with Ritchie for developing UNIX, received a
certificate from the US Chess Federation for the first non-human
chess master (for Belle), and a $16,000 award from the Commonwealth
Foundation for the invention award of the year (software) for his
work on UNIX, C, and Belle.  Lastly, it is interesting to note that
this is the 4th world championship.  They are held 3 years apart, and
no program has won more than one of them.

------------------------------

Date: Mon, 17 Oct 83 10:41:19 CDT
From: wagner@compion-vms
Subject: Announcement:  VERUS verification system offered

         Use of the VERUS Verification System Offered
         --------------------------------------------

VERUS is a software design specification and verification system
produced by Compion Corporation, Urbana, Illinois.  VERUS was designed
for speed and ease of use.  The VERUS language is an extension of
the first-order predicate calculus designed for a software
engineering environment.  VERUS includes a parser and a theorem prover.

Compion now offers use of VERUS over the MILNET/ARPANET.  Use is for a
maximum of 4 weeks.  Each user is provided with:

        1. A unique sign-on to Compion's VAX 11/750 running VMS

        2. A working directory

        3. Hard-copy user manuals for the use period.


If you are interested, contact Fran Wagner (wagner@compion-vms).
Note that the new numerical address for compion-vms is 10.2.0.55.

Please send the following information to help us prepare for you
to use VERUS:
               your name
               organization
               U.S. mailing address
               telephone number
               network address
               whether you are on the MILNET or the ARPANET
               whether you are familiar with VMS
               whether you have a DEC-supported terminal
               desired starting date and length of use

We will notify you when you can log on and send you hard-copy user
documents including a language manual, a user's guide, and a guide
to writing state machine specifications.

After the network split, VERUS will be available over the MILNET
and, by special arrangement, over the ARPANET.

__________
VERUS is a trademark of Compion Corporation.
DEC, VAX, and VMS are trademarks of Digital Equipment Corporation.

------------------------------

Date: 26 Oct 1983 19:34:39-EDT
From: mac%mit-vax @ MIT-MC
Subject: FGCS Call for Papers

                         CALL FOR PAPERS

                            FGCS '84

International Conference on Fifth Generation Computer Systems, 1984

        Institute for New Generation Computer Technology

                November 6-9, 1984   Tokyo, Japan


The scope of technical sessions of  this  conference  encompasses
the  technical  aspects  of new generation computer systems which
are being explored particularly within  the  framework  of  logic
programming and novel architectures.  This conference is intended
to promote interaction among researchers in all  disciplines  re-
lated to fifth generation computer technology.  The topics of in-
terest include (but are not limited to) the following:


                          PROGRAM AREAS

Foundations for Logic Programs
  * Formal semantics/pragmatics
  * Computation models
  * Program analysis and complexity
  * Philosophical aspects
  * Psychological aspects

Logic Programming Languages/Methodologies
  * Parallel/Object-oriented programming languages
  * Meta-level inferences/control
  * Intelligent programming environments
  * Program synthesis/understanding
  * Program transformation/verification

Architectures for New Generation Computing
  * Inference machines
  * Knowledge base machines
  * Parallel processing architectures
  * VLSI architectures
  * Novel human-machine interfaces

Applications of New Generation Computing
  * Knowledge representation/acquisition
  * Expert systems
  * Natural language understanding/machine translation
  * Graphics/vision
  * Games/simulation

Impacts of New Generation Computing
  * Social/cultural
  * Educational
  * Economic
  * Industrial
  * International


                 ORGANIZATION OF THE CONFERENCE

Conference Chairman      : Tohru Moto-oka, Univ of Tokyo
Conference Vice-chairman : Kazuhiro Fuchi, ICOT
Program Chairman         : Hideo Aiso, Keio Univ
Publicity Chairman       : Kinko Yamamoto, JIPDEC
Secretariat              : FGCS'84 Secretariat, Institute for New
                           Generation Computer Technology (ICOT)
                           Mita Kokusai Bldg. 21F
                           1-4-28 Mita, Minato-ku, Tokyo 108, Japan
                           Phone: 03-456-3195  Telex: 32964 ICOT


                  PAPER SUBMISSION REQUIREMENTS

Four copies of manuscripts should be submitted by April 15, 1984 to :
        Prof. Hideo Aiso
        Program chairman
        ICOT
        Mita Kokusai Bldg. 21F
        1-4-28 Mita, Minato-ku
        Tokyo 108, Japan

Papers are restricted  to  20  double-spaced  pages  (about  5000
words) including figures.  Each paper must contain a 200-250 word
abstract.  Papers must be written and presented in English.

Papers will be reviewed by international referees.  Authors  will
be notified of acceptance by June 30, 1984, and will be given in-
structions for final preparation of their papers  at  that  time.
Camera-ready  papers  for  the  proceedings should be sent to the
Program Chairman prior to August 31, 1984.

Intending authors are requested to return the attached reply card
with tentative subjects.


                       GENERAL INFORMATION

Date  : November 6-9, 1984
Venue : Keio Plaza Hotel, Tokyo, Japan
Host  : Institute for New Generation Computer Technology
Outline of the Conference Program :
        General Sessions
          Keynote speeches
          Report of research activities on Japan's FGCS Project
          Panel discussions
        Technical sessions (Parallel sessions)
          Presentation by invited speakers
          Presentation of submitted papers
        Special events
          Demonstration of current research results
          Technical visit
Official languages :
                English/Japanese
Participants:   600
Further information:
        Conference information will be available in December, 1983.


                     **** FGCS PROJECT ****

The Fifth Generation Computer Systems (FGCS) Project, launched in
April,  1982,  is  planned  to  span about ten years.  It aims at
realizing more user-friendly  and  intelligent  computer  systems
which  incorporate  inference and knowledge base management func-
tions based on innovative computer architecture, and  at  contri-
buting  thereby to future society.  The Institute for New Genera-
tion Computer Technology (ICOT) was established  as  the  central
research  institute of the project.  The ICOT Research Center be-
gan its research activities in June, 1982  with  the  support  of
government, academia and industry.

------------------------------

End of AIList Digest
********************

∂01-Nov-83  1649	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #87
Received: from SRI-AI by SU-AI with TCP/SMTP; 1 Nov 83  16:48:28 PST
Date: Tuesday, November 1, 1983 9:47AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #87
To: AIList@SRI-AI


AIList Digest            Tuesday, 1 Nov 1983       Volume 1 : Issue 87

Today's Topics:
  Rational Psychology - Definition,
  Parallel Systems,
  Consciousness & Intelligence,
  Halting Problem,
  Molecular Computers
----------------------------------------------------------------------

Date: 29 Oct 83 23:57:36-PDT (Sat)
From: hplabs!hao!csu-cs!denelcor!neal @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: denelcor.182

I see what you are saying and I beg to disagree.  I don't see the
distinction between rational and irrational psychology (it's probably
not that simple) as depending on whether or not the scientist is being
rational, but on whether or not the subject is (or rather which aspect of
his behavior--or mentation, if you accept the existence of that--is under
consideration).  It is more like the distinction between organic and
inorganic chemistry.

------------------------------

Date: Mon, 31 Oct 83 10:16:00 PST
From: Philip Kahn <kahn@UCLA-CS>
Subject: Sequential vs. parallel

 It was claimed that "parallel computation can always
be done sequentially."  I had thought that this naive concept had passed
away into never never land, but I suppose not.  I do not deny that MANY
parallel computations can be accomplished sequentially, yet not ALL
parallel computations can be made sequential.  The class of parallel
computations that cannot be accomplished sequentially comprises those that
involve the state of all variables in a single instant.  This class
of parallelism often arises in sensor applications.  It would not be
valid, for example, to raster-scan (sequential computation) a sensing field
if the processing of that sensing field relied upon the quantization of
elements in a single instant.

     I don't want to belabor this point, but it should be recognized
that the common assertion that all parallel computation can be done
sequentially is NOT ALWAYS VALID.  In my own experience, I have found
that artificial intelligence (and real biologic intelligence for that
matter) relies heavily upon comparisons of various elements at a single
time instant.  As such, the assumption of sequentialty of parallelistic
algorithms is often invalid.  Something to think about.

------------------------------

Date: Saturday, 29 Oct 1983 21:05-PST
From: sdcrdcf!trw-unix!scgvaxd!qsi03!achut@rand-relay
Subject: Consciousness, Halting Problem, Intelligence


        I am new to this mailing list and I see there is some lively
discussion going on.  I am eager to contribute to it.

Consciousness:
        I treat the words self-awareness, consciousness, and soul as
synonyms in the context of these discussions.  They are all epiphenomena
of the phenomenon of intelligence, along with emotions, desires, etc.
To say that machines can never be truly intelligent because they cannot
have a "soul" is to be excessively naive and anthropocentric.  Self-
awareness is not a necessary prerequisite for intelligence; it arises
naturally *because* of intelligence.  All intelligent beings possess some
degree of self-awareness; to perceive and interact with the world, there
must be an internal model, and this invariably involves taking into
account the "self".  A very, very low intelligence, like that of a plant,
will possess a very, very low self-awareness.

Parallelism:
        The human brain resembles a parallel machine more than it does a
purely sequential one.  Parallel machines can do many things much quicker
than their sequential counterparts.  Parallel hardware may well make the
difference between the attainment of AI in the near future and the
unattainment for several decades.  But I cannot understand those who claim
that there is something *fundamentally* different between the two types of
architectures.  I am always amazed at the extremes to which some people will
go to find the "magic spark" which separates intelligence from non-
intelligence.  Two of these are "continuousness vs. discreteness" and
"non-determinism vs. determinism".
        Continuous?  Nothing in the universe is continuous. (Except maybe
arguments to the contrary :-))  Mass, energy, space and even time, at least
according to current physical knowledge, are all quantized.  Non-determinism?
Many people feel that "randomness" is a necessary ingredient to intelligence.
But why isn't this possible with a sequential architecture?  I can
construct a "discrete" random number generator for my sequential machine
so that it behaves in a similar manner to your "non-deterministic" parallel
machine, although perhaps slower. (See "Slow intelligence" below)
Perhaps the "magic sparkers" should consider that the difference they are
searching for is merely one of complexity.  (I really hate to use the
word "merely", since I appreciate the vast scope of the complexity, but
it seems appropriate here.)  There is no evidence, currently, to justify
thinking otherwise.
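The "discrete" random number generator mentioned above can be made concrete;
a minimal sketch (the generator constants and the choice interface are
illustrative assumptions, not a specific proposal):

```python
def lcg(seed):
    """A purely sequential, deterministic linear congruential
    generator; the same seed always yields the same sequence."""
    state = seed
    while True:
        state = (1103515245 * state + 12345) % (2 ** 31)
        yield state

def choose(options, rng):
    """Emulate a 'non-deterministic' choice among options with the
    deterministic stream: outwardly random, inwardly sequential."""
    return options[next(rng) % len(options)]

rng = lcg(42)
moves = [choose(["left", "right", "wait"], rng) for _ in range(5)]
```

Seeding from a clock would make the behavior as unpredictable, from the
outside, as that of a "non-deterministic" parallel machine, which is the
point being argued.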

The Halting(?) Problem:
        What Stan referred to as the "Halting Problem" is really
the "looping problem", hence the subsequent confusion.  The Halting Problem
is not really relevant to AI, but the looping problem *is* relevant.  The
question is not even "why don't humans get caught in loops", since, as
Mr. Frederking aptly points out, "beings which aren't careful about this
fail to breed, and are weeded out by evolution".  (For an interesting story
of what could happen if this were not the case, see "The Riddle of the
Universe and Its Solution" by Christopher Cherniak in "The Mind's I".)  But rather, the
more interesting question is "by what mechanisms do humans avoid them?",
and then, "are these the best mechanisms to use in AI programs?".
It is not clear that this would not be a problem when AI is attempted on a
machine whose internal states could conceivably recur.  Now, I am not saying
that this is an insurmountable problem by any means; I am merely saying that
it might be a worthy topic of discussion.

Slow intelligence:
        Intelligence is dependent on time?  This would require a curious
definition of intelligence.  Suppose you played chess at strength 2000 given
5 seconds per move, 2010 given 5 minutes, and 2050 given as much time as you
desired.  Suppose the corresponding numbers for me were 1500, 2000, and 2500.
Who is the better (more intelligent) player?  True, I need 5 minutes per
move just to play as well as you can at only 5 seconds.  But shouldn't the
"high end" be compared instead?  There are many bases on which to decide the
"greater" of two intelligences.  One is (conceivably, but not exclusively)
speed.  Another is the number and power of inferences it can make in a given
situation.  Another is memory, and the ability to correlate current situations
with previous ones.  STRAZ@MIT-OZ has the right idea.  Incidentally, I'm
surprised that no one pointed out an example of an intelligence staring
us in the face which is slower but smarter than us all, individually.
Namely, this net!

------------------------------

Date: 25 Oct 83 13:34:02-PDT (Tue)
From: harpo!eagle!mhuxl!ulysses!cbosgd!cbscd5!pmd @ Ucb-Vax
Subject: Artificial Consciousness? [and Reply]

I'm interested in getting some feedback on some philosophical
questions that have been haunting me:

1) Is there any reason why developments in artificial intelligence
and computer technology could not someday produce a machine with
human consciousness (i.e. an I-story)?

2) If the answer to the above question is no, and such a machine were
produced, what would distinguish it from humans as far as "human"
rights were concerned?  Would it be murder for us to destroy such a
machine?  What about letting it die of natural (?) causes if we
have the ability to repair it indefinitely?
(Note:  Just having a unique, human genetic code does not legally make
one human as per the 1973 *Roe v. Wade* Supreme Court decision on
abortion.)

Thanks in advance.

Paul Dubuc

[For an excellent discussion of the rights and legal status of AI
systems, see Marshal Willick's "Artificial Intelligence: Some Legal
Approaches and Implications" in the Summer '83 issue (V. 4, N. 2) of
AI magazine.  The resolution of this issue will of course be up to the
courts. -- KIL]

------------------------------

Date: 28 Oct 1983 21:01-PDT
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Halting in learning programs

        If you restrict the class of things that can be learned by your
program to those which don't cause infinite recursion or circularity,
you will have a good solution to the halting problem you state.
Although generalized learning might be nice, until we know more about
learning, it might be more appropriate to select specific classes of
adaptation which lend themselves to analysis and development of new
theories.

        As a simple example of a learning automaton free of this halting
problem, the Purr Puss system developed by John Andreae (of New Zealand) does
an excellent job of learning without any such difficulty.  Other such
systems exist as well; all you have to do is look for them.  I guess the
point is that rather than pursue the impossible, find something
possible that may lead to the solution of a bigger problem and pursue
it with the passion and rigor worthy of the problem. An old saying:
'Problems worthy of attack prove their worth by fighting back'

                Fred

------------------------------

Date: Sat, 29 Oct 83 13:23:33 CDT
From: Bob.Warfield <warbob.rice@Rand-Relay>
Subject: Halting Problem Discussion

It turns out that any computer program running on a real piece of hardware
may be simulated by a deterministic finite automaton, since it only has a
finite (but very large) number of possible states. This is usually not a
productive observation to make, but it does present one solution to the
halting problem for real (i.e. finite) computing hardware. Simulate the
program in question as a DFA and look for loops. From this, one should
be able to tell what input to the DFA would produce an infinite loop,
and recognition of that input could be done by a smaller DFA (the old
one sans loops) that gets incorporated into the learning program. It
would run the DFA in parallel (or 1 step ahead?) and take action if a
dangerous situation appeared.
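
Bob's observation can be tested directly on small machines.  A minimal
Python sketch of "simulate the DFA and look for loops" (the transition
tables here are toy examples, not a real program's state graph):

```python
def runs_forever(transition, start):
    """Simulate a finite-state machine and look for loops.  transition
    maps each state to its successor, or to None to halt.  On a finite
    state set the walk must either halt or revisit a state, and a
    revisit proves an infinite loop."""
    seen = set()
    state = start
    while state is not None:
        if state in seen:
            return True          # a state recurred: caught in a loop
        seen.add(state)
        state = transition.get(state)
    return False                 # reached a halting state

# Toy transition tables, purely for illustration:
looping = runs_forever({"a": "b", "b": "a"}, "a")    # a -> b -> a -> ...
halting = runs_forever({"a": "b", "b": None}, "a")   # a -> b -> halt
```

The catch, of course, is the "very large" in Bob's parenthesis: the
method is sound for finite hardware, but the state count makes it
impractical for anything but tiny machines.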

                                        Bob Warfield
                                        warbob@rice

------------------------------

Date: Mon 31 Oct 83 15:45:12-PST
From: Calton Pu <CALTON@WASHINGTON.ARPA>
Subject: Halting Problem: Resource Use

   From Shebs@Utah-20:

        The question is this: consider a learning program, or any
        program that is self-modifying in some way.  What must I do
        to prevent it from getting caught in an infinite loop, or a
        stack overflow, or other unpleasantnesses?  ...
        How can *it* know when it's stuck in a losing situation?

Trying to come up with a loop detector program seemed to find few enthusiasts.
The limited loop detector suggests another approach to the "halting problem".
The question above does not require the solution of the halting problem,
although that could help.   The question posed is one of resource allocation
and use.   The only self-awareness necessary is for the program to watch itself
and know whether it is making progress, considering its resource consumption.
Consequently it is not surprising that:

        The best answers I saw were along the lines of an operating
        system design, where a stuck process can be killed, or
        pushed to the bottom of an agenda, or whatever.

However, Stan wants more:

        Workable, but unsatisfactory.  In the case of an infinite
        loop (that nastiest of possible errors), the program can
        only guess that it has created a situation where infinite
        loops can happen.

The real issue here is not whether the program is in a loop, but whether the
program will be able to find a solution in feasible time.   Suppose a program
will take a thousand years to find a solution; will you let it run that long?
In other words, the problem is one of measuring progress gained versus
resources spent.   It may turn out that a program is not in a loop, but you
choose to write another program instead of letting the first run to completion.
Looping is just one of the losing situations.

Summarizing, the learning program should be allowed to see a losing situation
because it is infeasible, whether the solution is possible or not.
From this view, there are two aspects to the decision: the measurement of
progress made by the program, and monitoring resource consumption.
It is the second aspect that involves some "operating systems design".
I would be interested to know whether your parser knows it is making progress.
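
Calton's criterion, progress gained versus resources spent, can be
sketched as a budget check wrapped around any iterative solver.  All names
and thresholds below are illustrative, not a real scheduler:

```python
import time

def run_with_budget(solver_steps, time_budget, min_rate):
    """Run an iterative solver, stopping it when the time budget runs
    out or progress per second falls below min_rate.  solver_steps
    yields cumulative progress figures between 0.0 and 1.0."""
    start = time.monotonic()
    progress = 0.0
    for progress in solver_steps:
        elapsed = time.monotonic() - start
        if progress >= 1.0:
            return "solved", progress
        if elapsed > time_budget:
            return "out of time", progress
        if elapsed > 0 and progress / elapsed < min_rate:
            return "too slow", progress    # not provably looping, just losing
    return "gave up", progress

# A made-up solver that reaches a full solution in three steps:
outcome = run_with_budget(iter([0.25, 0.5, 1.0]), time_budget=5.0, min_rate=0.0)
```

Note that the monitor never decides whether the solver is in a loop; it
decides only whether the solver is still worth its resources, which is
the weaker and more practical question.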


                -Calton-

        Usenet: ...decvax!microsoft!uw-beaver!calton

------------------------------

Date: 31 Oct 83 2030 EST
From: Dave.Touretzky@CMU-CS-A
Subject: forwarded article


- - - - Begin forwarded message - - - -
  Date: 31 Oct 1983  18:41 EST (Mon)
  From: Daniel S. Weld <WELD%MIT-OZ@MIT-MC.ARPA>
  To:   macmol%MIT-OZ@MIT-MC.ARPA
  Subject: Molecular Computers

  Below is a forwarded message:
    From: David Rogers <DRogers at SUMEX-AIM.ARPA>

I have always been confused by the people who work on
"molecular computers", it seems so stupid. It seems much
more reasonable to consider the reverse application: using
computers to make better molecules.

Is anyone out there excited by this stuff?

                MOLECULAR  COMPUTERS  by  Lee  Dembart, LA Times
              (reprinted from the San Jose Mercury News 31 Oct 83)

SANTA MONICA - Scientists have dreamed for the past few years of
building a radically different kind of computer, one based on
molecular reactions rather than on silicon.

With such a machine, they could pack circuits much more tightly than
they can inside today's computers.  More important, a molecular
computer might not be bound by the rigid binary logic of conventional
computers.

Biological functions - the movement of information within a cell or
between cells - are the models for molecular computers. If that basic
process could be reproduced in a machine, it would be a very powerful
machine.

But such a machine is many, many years away.  Some say the idea is
science fiction.  At the moment, it exists only in the minds of
several dozen computer scientists, biologists, chemists and engineers,
many of whom met here last week under the aegis of the Crump Institute
for Medical Engineering at the University of California at Los
Angeles.

"There are a number of ideas in place, a number of technologies in
place, but no concrete results," said Michael Conrad, a biologist and
computer scientist at Wayne State University in Detroit and a
co-organizer of the conference.

For all their strengths, today's digital computers have no ability to
judge.  They cannot recognize patterns. They cannot, for example,
distinguish one face from another, as even babies can.

A great deal of information can be packed on a computer chip, but it
pales by comparison to the contents of the brain of an ant, which can
protect itself against its environment.

If scientists had a computer with more flexible logic and circuitry,
they think they might be able to develop "a different style of
computing", one less rigid than current computers, one that works more
like a brain and less like a machine.  The "mood" of such a device
might affect the way scientists solve problems, just as people's moods
affect their work.

The computing molecules would be manufactured by genetically
engineered bacteria, which has given rise to the name "biochip" to
describe a network of them.

"This is really the new gene technology", Conrad said.

The conference was a meeting on the frontiers - some would say fringes
- of knowledge, and several times participants scoffed, saying that
the discussion was meandering into philosophy.

The meeting touched on some of the most fundamental questions of brain
and computer research, revealing how little is known of the mind's
mechanisms.

The goal of artificial intelligence work is to write programs that
simulate thought on digital computers. The meeting's goal was to think
about different kinds of computers that might do that better.

Among the questions posed at the conference:

- How do you get a computer to chuckle at a joke?

- What is the memory capacity of the brain? Is there a limit to that
capacity?

- Are there styles of problem solving that are not digitally
computable?

- Can computer science shed any light on the mechanisms of biological
science?  Can computer science problems be addressed by biological
science mechanisms?

Proponents of molecular computers argue that it is possible to make
such a machine because biological systems perform those processes all
the time.  Proponents of artificial intelligence have argued for years
that the existence of the brain is proof that it is possible to make a
small machine that thinks like a brain.

It is a powerful argument.  Biological systems already exist that
compute information in a better way than digital computers do. "There
has got to be inspiration growing out of biology", said F. Eugene
Yates, the Crump Institute's director.

Bacteria use sophisticated chemical processes to transfer information.
Can that process be copied?

Enzymes work by stereoscopically matching their molecules with other
molecules, a decision-making process that occurs thousands of times a
second.  It would take a binary computer weeks to make even one match.

"It's that failure to do a thing that an enzyme does 10,000 times a
second that makes us think there must be a better way," Yates said.

In the history of science, theoretical progress and technological
progress are intertwined.  One makes the other possible. It is not
surprising, therefore, that thinking about molecular computers has
been spurred recently by advances in chemistry and biotechnology that
seem to provide both the materials needed and a means for producing them
on a commercial scale.

"If you could design such a reaction, you could probably get a
bacteria to make it," Yates said.

Conrad thinks that a functioning machine is 50 years away, and he
described it as a "futuristic" development.
- - - - End forwarded message - - - -

------------------------------

End of AIList Digest
********************

∂03-Nov-83  1710	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #88
Received: from SRI-AI by SU-AI with TCP/SMTP; 3 Nov 83  17:10:10 PST
Date: Thursday, November 3, 1983 1:09PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #88
To: AIList@SRI-AI


AIList Digest            Thursday, 3 Nov 1983      Volume 1 : Issue 88

Today's Topics:
  Molecular Computers - Comment,
  Sequential Systems - Theoretical Sufficiency,
  Humanness - Definition, 
  Writing Analysis - Reference,
  Lab Report - Prolog and SYLLOG at IBM,
  Seminars - Translating LISP & Knowledge and Reasoning
----------------------------------------------------------------------

Date: 1 Nov 83 1844 EST
From: Dave.Touretzky@CMU-CS-A
Subject: Comment on Molecular Computers


- - - - Begin forwarded message - - - -
Date: Tue, 1 Nov 1983  12:19 EST
From: DANNY%MIT-OZ@MIT-MC.ARPA
To:   Daniel S. Weld <WELD%MIT-OZ@MIT-MC.ARPA>
Subject: Molecular Computers

I was at the Molecular Computer conference.  Unfortunately, there has
been very little progress since the Molecular Electronics conference a year
ago.  The field is too full of people who think analog computation is
"more powerful" and who think that Goedel's proof shows that people
can always think better than machines.  Sigh.
--danny

------------------------------

Date: Thursday, 3 November 1983 13:27:10 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Parallel vs. Sequential

Re: Phillip Kahn's claim that "not ALL parallel computations can be made
sequential": I don't believe it, unless you are talking about infinitely
many processing elements.  The Turing Machine is the most powerful model of
computation known, and it is inherently serial (and equivalent to a
Tessellation Automaton, which is totally parallel).  Any computation that
requires all the values at an "instant" can simply run at N times the
sampling rate of your sensors: it locks them, reads each one, and makes its
decisions after looking at all of them, and then unlocks them to examine the
next time slice.  If one is talking practically, this might not be possible
due to speed considerations, but theoretically it is possible.  So while at
a theoretical level ALL parallel computations can be simulated sequentially,
in practice one often requires parallelism to cope with real-world speeds.
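
Robert's lock-read-unlock argument is essentially double buffering:
compute every new value from a frozen snapshot of the old frame.  A
minimal Python sketch (the update rule is invented for illustration):

```python
def sequential_step(cells, rule):
    """One 'instant' of a parallel machine, run sequentially: freeze
    the current frame, compute every new value from the frozen copy,
    then return the whole next frame.  No cell ever sees a half-updated
    neighbour, so the result matches a truly simultaneous update."""
    frozen = list(cells)                  # the 'locked' snapshot
    n = len(frozen)
    return [rule(frozen[(i - 1) % n], frozen[i], frozen[(i + 1) % n])
            for i in range(n)]

# Invented rule: each cell becomes the majority of its neighbourhood.
majority = lambda left, me, right: 1 if left + me + right >= 2 else 0
next_frame = sequential_step([1, 0, 1, 1, 0], majority)
```

The sequential version pays a factor of N in time for a frame of N
cells, which is exactly the practical (not theoretical) gap Robert
describes.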

------------------------------

Date: 2 Nov 83 10:52:22 PST (Wednesday)
From: Hoffman.es@PARC-MAXC.ARPA
Subject: Awareness, Human-ness


Sorry it took me a while to track this down.  It's something I recalled
when reading the discussion of awareness in V1 #80.  It's been lightly
edited.

--Rodney Hoffman

**** **** **** **** **** **** **** ****

From Richard Rorty's book, "Philosophy and The Mirror of Nature":

Personhood is a matter of decision rather than knowledge, an acceptance
of another being into fellowship rather than a recognition of a common
essence.

Knowledge of what pain is like or what red is like is attributed to
beings on the basis of their potential membership in the community.
Thus babies and the more attractive sorts of animal are credited with
"having feelings" rather than  (like machines or spiders) "merely
responding to stimuli."  To say that babies know what heat is like, but
not what the motion of molecules is like is just to say that we can
fairly readily imagine them opening their mouths and remarking on the
former, but not the latter.  To say that a gadget that says "red"
appropriately *doesn't* know what red is like is to say that we cannot
readily imagine continuing a conversation with the gadget.

Attribution of pre-linguistic awareness is merely a courtesy extended to
potential or imagined fellow-speakers of our language.  Moral
prohibitions against hurting babies and the better looking sorts of
animals are not based on their possession of feeling.  It is, if
anything, the other way around.  Rationality about denying civil rights
to morons or fetuses or robots or aliens or blacks or gays or trees is a
myth.  The emotions we have toward borderline cases depend on the
liveliness of our imagination, and conversely.

------------------------------

Date: 1 November 1983 18:55 EDT
From: Herb Lin <LIN @ MIT-ML>
Subject: writing analysis

You might want to take a look at some of the stuff by R. Flesch
who is the primary exponent of a system that takes word and sentence
and paragraph lengths and turns it into grade-equivalent reading
scores.  It's somewhat controversial.

[E.g., The Art of Readable Writing.  Or, "A New Readability Index",
J. of Applied Psychology, 1948, 32, 221-233.  References to other
authors are also given in Cherry and Vesterman's writeup of the
STYLE and DICTION systems included in Berkeley Unix.  -- KIL]

------------------------------

Date: Monday, 31-Oct-83  11:49:55-GMT
From: Bundy HPS (on ERCC DEC-10) <Bundy@EDXA>
Subject: Prolog and SYLLOG at IBM

                 [Reprinted from the Prolog Digest.]


    Date: 9 Oct 1983 11:43:51-PDT (Sunday)
    From: Adrian Walker <ADRIAN.IBM@Rand-Relay>
    Subject: Prolog question


                                   IBM Research Laboratory K51
                                   5600 Cottle Road
                                   San Jose
                                   CA 95193 USA

                                   Telephone:    408-256-6999
                                   ARPANet: Adrian.IBM@Rand-Relay

                                   10th October 83


Alan,

In answer to your question about Prolog implementations, we
do most of our work using the Waterloo Prolog 1.3 interpreter
on an IBM mainframe (3081).  Although not a traditional AI
environment, this turns out to be pretty good.  For instance,
the speed of the Interpreter turns out to be about the same
as that of compiled DEC-10 Prolog (running on a DEC-10).

As for environment, the system delivered by Waterloo is
pretty much stand alone, but there are several good environments
built in Prolog on top of it.

A valuable feature of Waterloo Prolog 1.3 is a 'system' predicate,
which can call anything on the system, e.g., a full-screen editor.

The work on extracting explanations of 'yes' and 'no' answers
from Prolog, which I reported at IJCAI, was done in Waterloo
Prolog.  We have also implemented a syllogistic system called
SYLLOG, and several expert system types of applications.  An
English language question answerer written by Antonio Porto and
me produces instantaneous answers, even when the 3081 has 250
users.

As far as I know, Waterloo Prolog only runs under the VM operating
system (not yet under MVS, the other major IBM OS for mainframes).
It is available, for a moderate academic licence fee, from Sandra
Ward, Department of Computing Services, University of Waterloo,
Waterloo, Ontario, Canada.

We use it with IBM 3279 colour terminals, which adds variety to a
long day at the screen, and can also be useful !

Best wishes,

-- Adrian Walker

Walker, A. (1981). 'SYLLOG: A Knowledge Based Data Management
System,' Report No. 034. Computer Science Department, New York
University, New York.

Walker, A. (1982). 'Automatic Generation of Explanations of
Results from Knowledge Bases,' RJ3481. Computer Science
Department, IBM Research Laboratory, San Jose, California.

Walker, A. (1983a). 'Data Bases, Expert Systems, and PROLOG,'
RJ3870. Computer Science Department, IBM Research Laboratory,
San Jose, California. (To appear as a book chapter)

Walker, A. (1983b). 'Syllog: An Approach to Prolog for
Non-Programmers.' RJ3950, IBM Research Laboratory, San Jose,
California. (To appear as a book chapter)

Walker, A. (1983c). 'Prolog/EX1: An Inference Engine which
Explains both Yes and No Answers.'
RJ3771, IBM Research Laboratory, San Jose, California.
(Proc. IJCAI 83)

Walker, A. and Porto, A. (1983). 'KBO1, A Knowledge Based
Garden Store Assistant.'
RJ3928, IBM Research Laboratory, San Jose, California.
(In Proc Portugal Workshop, 1983.)

------------------------------

Date: Mon 31 Oct 83 22:57:03-CST
From: John Hartman <CS.HARTMAN@UTEXAS-20.ARPA>
Subject: Fri. Grad Lunch - Understanding and Translating LISP

                [Reprinted from the UTEXAS-20 bboard.]

GRADUATE BROWN BAG LUNCH - Friday 11/4/83, PAI 5.60 at noon:

I will talk about how programming knowledge contributes to
understanding programs and translating between high level languages.
The problems of translating between LISP and MIRROR (= HLAMBDA) will
be introduced.  Then we'll look at the translation of A* (Best First
Search) and see some examples of how recognizing programming cliches
contributes to the result.

I'll try to keep it fairly short with the hope of getting critical
questions and discussion.


Old blurb:
I am investigating how a library of standard programming constructs
may be used to assist understanding and translating LISP programs.
A programmer reads a program differently than a compiler because she
has knowledge about computational concepts such as "fail/succeed loop"
and can recognize them by knowing standard implementations.  This
recognition benefits program reasoning by creating useful abstractions and
connections between program syntax and the domain.

The value of cliche recognition is being tested for the problem of
high level translation.  Rich and Temin's MIRROR language is designed
to give a very explicit, static expression of program information
useful for automatically answering questions about the program.  I am
building an advisor for LISP to MIRROR translation which will exploit
recognition to extract implicit program information and guide
transformation.

------------------------------

Date: Wed, 2 Nov 83 09:17 PST
From: Moshe Vardi <vardi@Diablo>
Subject: Knowledge Seminar

               [Forwarded by Yoni Malachi <YM@SU-AI>.]

We are planning to start at IBM San Jose a research seminar on
theoretical aspects of reasoning about knowledge, such as reasoning
with incomplete information, reasoning in the presence of
inconsistencies, and reasoning about changes of belief.  The first few
meetings are intended to be introductory lectures on various attempts
at formalizing the problem, such as modal logic, nonmonotonic logic,
and relevance logic.  There is a lack of good research in this area,
and the hope is that after a few introductory lectures, the format of
the meetings will shift into a more research-oriented style.  The
first meeting is tentatively scheduled for Friday, Nov. 18, at 1:30,
with future meetings also to be held on Friday afternoon, but this may
change if there are a lot of conflicts.  The first meeting will be
partly organizational in nature, but there will also be a talk by Joe
Halpern on "Applying modal logic to reason about knowledge and
likelihood".

For further details contact:

Joe Halpern [halpern.ibm-sj@rand-relay, (408) 256-4701]
Yoram Moses [yom@sail, (415) 497-1517]
Moshe Vardi [vardi@su-hnv, (408) 256-4936]


    03-Nov-83  0016     MYV     Knowledge Seminar
    We may have a problem with Nov. 18. The response from Stanford to the
    announcement is overwhelming, but we have a room for only 25 people.
    We may have to postpone the seminar.


To be added to the mailing list contact Moshe Vardi (MYV@sail,vardi@su-hnv)

------------------------------

End of AIList Digest
********************

∂04-Nov-83  0029	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #89
Received: from SRI-AI by SU-AI with TCP/SMTP; 4 Nov 83  00:28:07 PST
Date: Thursday, November 3, 1983 4:59PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #89
To: AIList@SRI-AI


AIList Digest             Friday, 4 Nov 1983       Volume 1 : Issue 89

Today's Topics:
  Intelligence - Definition & Measurement & Necessity for Definition
----------------------------------------------------------------------

Date: Tue, 1 Nov 83 13:39:24 PST
From: Philip Kahn <v.kahn@UCLA-LOCUS>
Subject: Definition of Intelligence

        When it comes down to it, isn't intelligence the ability to
recognize space-time relationships?  The nice thing about this definition
is that it recognizes that ants, programs, and humans all possess
varying degrees of intelligence (that is, varying degrees in their
ability to recognize space-time relationships).  This implies that
intelligence is only correlative, and only indirectly related to
physical environmental interaction.

------------------------------

Date: Tue, 1 Nov 1983  22:22 EST
From: SLOAN%MIT-OZ@MIT-MC.ARPA
Subject: Slow intelligence/chess

        ... Suppose you played chess at strength 2000 given 5 seconds
        per move, 2010 given 5 minutes, and 2050 given as much time as
        you desired...

An excellent point.  Unfortunately wrong.  This is a common error,
made primarily by 1500 players and promoters of chess toys.  Chess
ratings measure PERFORMANCE at TOURNAMENT TIME CONTROLS (generally
ranging from 1.5 to 3 moves per minute).  To speak of "strength
2000 at 5 seconds per move" or "2500 given as much time as desired" is
absolutely meaningless.  That is why there are two domestic rating
systems, one for over-the-board play and another for postal chess.
Both involve time limits, the limits are very different, and the
ratings are not comparable.  There is probably some correlation,  but
the sets of skills involved are incomparable.
  This is entirely in keeping with the view that intelligence is
coupled with the environment, and involves a speed factor (you must
respond in "real-time" - whatever that happens to mean.)  It also
speaks to the question of "loop-avoidance": in the real world, you
can't step in the same stream twice; you must muddle through, ready or
not.
  To me, this suggests that all intelligent behavior consists of
generating crude, but feasible solutions to problems very quickly (so
as to be ready with a response) and then incrementally improving the
solution as time permits.  In an ever changing environment, it is
better to respond inadequately than to ponder moot points.
-Ken Sloan
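
Sloan's strategy, a crude feasible answer refined as time permits, is
what would later be called an anytime algorithm.  A minimal Python
sketch (the example problem and improvement step are invented for
illustration):

```python
import time

def anytime_solve(initial, improve, deadline):
    """Keep a feasible answer available at all times: start with a
    crude one and refine it until the deadline, so a response is
    always ready when the environment demands one."""
    best = initial                       # crude, but ready immediately
    while time.monotonic() < deadline:
        candidate = improve(best)
        if candidate is None:            # no further improvement found
            break
        best = candidate
    return best

# Invented example: refine an estimate of sqrt(2) by Newton steps.
improve = lambda x: None if abs(x * x - 2) < 1e-12 else (x + 2 / x) / 2
answer = anytime_solve(1.0, improve, time.monotonic() + 0.1)
```

Interrupt it at any point and it still hands back the best answer so
far, which is precisely the "respond inadequately rather than ponder"
behavior Sloan argues for.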

------------------------------

Date: Tue, 1 Nov 1983 10:15:54 EST
From: AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications
      Mgr.)
Subject: Turing Test Re-visited

     I see that the Turing Test has (not unexpectedly) crept back into the
discussions of intelligence (1:85).  I've wondered a bit as to whether the
TT shouldn't be extended a bit; to wit, the challenge it poses should not only
include the ability to "pass" the test, but also the ability to act as a judge
for the test.  Examining the latter should give us all sorts of clues as to
what preconceived notions we're imposing when we try to develop a machine or
program that satisfies only Turing's original problem.

Dave Axler

------------------------------

Date: Wed, 2 Nov 1983  10:10 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Parallelism & Consciousness


What I meant is that defining intelligence seems as pointless as
defining "life" and then arguing whether viruses are alive instead of
asking how they work and solve the problems that appear to us to be
the interesting ones.  Instead of defining so hard, one should look to
see what there is.

For example, about the loop-detecting thing, it is clear that in full
generality one can't detect all Turing machine loops.  But we all know
intelligent people who appear to be caught, to some extent, in thought
patterns that appear rather looplike.  That paper of mine on jokes
proposes that to be intelligent enough to keep out of simple loops,
the problem is solved by a variety of heuristic loop detectors, etc.
Of course, this will often deflect one from behaviors that aren't
loops and which might lead to something good if pursued.  That's life.


I guess my complaint is that I think it is unproductive to be so
concerned with defining "intelligence" to the point that you even
discuss whether "it" is time-scale invariant, rather than, say, how
many computrons it takes to solve some class of problems.  We want to
understand problem-solvers, all right.  But I think that the word
"intelligence" is a social one that accumulates all sorts of things
that one person admires when observed in others and doesn't understand
how to do.  No doubt, this can be narrowed down, with great effort,
e.g., by excluding physical skills (probably wrongly, in a sense) and
so forth.  But it seemed to me that the discussion here in AILIST was
going nowhere toward understanding intelligence, even in that sense.

In other words, it seems strange to me that there is no public
discussion of substantive issues in the field...

------------------------------

Date: Wed, 2 Nov 1983  10:21 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Intelligence and Competition


   The ability to cope with  a CHANGE
    in  the environment marks  intelligence.


See, this is what's usually called adaptiveness.  This is why you
don't get anywhere defining intelligence -- until you have a clear idea
to define.  Why be enslaved to the fact that people use a word, unless
you're sure it isn't a social accumulation.

------------------------------

Date: 2 Nov 1983 23:44-PST
From: ISAACSON@USC-ISI
Subject: Re: Parallelism & Consciousness


From Minsky:

    ...I think that the word "intelligence" is a social one
    that accumulates all sorts of things that one person
    admires when observed in others and doesn't understand how to
    do...

    In other words, it seems strange to me that there
    is no public discussion of substantive issues in the
    field...


Exactly...  I agree on both counts.  My purpose is to help
crystallize a few basic topics, worthy of serious discussion, that
relate to those elusive epiphenomena that we tend to lump under
that loose characterization: "Intelligence".  I read both your LM
and Jokes papers and consider them seminal in that general
direction.  I think, though, that your ideas there need, and
certainly deserve, further elucidation.  In fact, I was hoping
that you would be willing to state some of your key points to
this audience.


More than this.  Recently I've been attracted to Doug
Hofstadter's ideas on subcognition and think that attention
should be paid to them as well.  As a matter of fact, I see
certain affinities between you two and would like to see a good
discussion that centers on LM, Jokes, and Subcognition as
Computation.  I think that, in combination, some of the most
promising ideas for AI are awaiting full germination in those
papers.

------------------------------

Date: Thu, 3 Nov 1983  13:17 EST
From: BATALI%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence

    From Minsky:

    ...I think that the word "intelligence" is a social one
    that accumulates all sorts of things that one person
    admires when observed in others and doesn't understand how to
    do...

This seems like an extremely negative and defeatist thing to say.
What does it leave us in AI to do, but to ignore the very notion we
are supposedly trying to understand?  What will motivate one line of
research rather than another, what can we use to judge the quality of
a piece of research, if we have no idea what it is we are after?

It seems to me that one plausible approach to AI is to present an
arguable account of what intelligence is about, and then to show that
some mechanism is intelligent according to that account.  The account,
the "definition", of intelligence may not be intuitive to everyone at
first.  But the performance of the mechanisms constructed in accord
with the account will constitute evidence that the account is correct.
(This is where the Turing test comes in, not as a definition of
intelligence, but as evidence for its presence.)

------------------------------

Date: Tue 1 Nov 83 13:10:32-EST
From: SUNDAR@MIT-OZ
Subject: parallelism and consciousness

                 [Forwarded by RickL%MIT-OZ@MIT-MC.]

     [...]

     It seems evident from the recent conversations that the meaning of
intelligence is much more than mere 'survivability' or 'adaptability'.
Almost all the views expressed, however, took for granted the concept of
"time" -- which, it seems to me, is 'a priori' (in the Kantian sense).

What do you think of a view that says: intelligence is the ability of
an organism that enables it to preserve, propagate, and manipulate these
'a priori' concepts?  The motivation for doing so could be a simple
pleasure/pain mechanism (which again, I feel, involves concepts not
adequately understood).  It would seem that while the pain mechanism
would help cut down large search spaces when the organism comes up
against such problems, the pleasure mechanism would help in learning and
in the acquisition of new 'a priori' wisdom.

Clearly, in the case of organisms that multiply by fission (where the
line of division between parent and child is not exactly clear), the
structure of the organism may be preserved.  In such cases it would seem
that the organism survives seemingly forever.  However, it would not be
considered intelligent by the definition proposed above.

The questions that seem interesting to me, therefore, are:

1. How do humans acquire the concept of 'time'?
2. 'Change' seems to be measured in terms of time (adaptation, survival,
   etc. are all the presence or absence of change), but 'time' itself
   seems to be meaningless without 'change'!
3. How do humans decide whether an organism is 'intelligent' or not?
   It seems to me that most of the people on AIList made judgments (the
   amoeba, desert tortoise, and cockroach examples), which should mean
   that they either knew what intelligence was or wasn't -- but it still
   isn't exactly clear after all the smoke's cleared.

    Any comments on the above ideas?  As a relative novice to the field
of AI I'd appreciate your opinions.

Thanks.

--Sundar--

------------------------------

Date: Thu, 3 Nov 1983  16:42 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence


Sure.  I agree you want an account of what intelligence is "about".
When I complained about making a "definition" I meant
one of those useless compact thingies in dictionaries.

But I don't agree that you need this for scientific motivation.
Batali: do you really think Biologists need definitions of Life
for such purposes?

Finally, I simply don't think this is a compact phenomenon.
Any such "account", if brief, will be very partial and incomplete.
To expect a test to show that "the account is correct" depends
on the nature of the partial theory.  In a nutshell, I still
don't see any use at all for
such a definition, and it will lead to calling all sorts of
partial things "intelligence".  The kinds of accounts to confirm
are things like partial theories that need their own names, like

   heuristic search method
   credit-assignment scheme
   knowledge-representation scheme, etc.

As in biology, we simply are much too far along to be so childish as
to say "this program is intelligent" and "this one is not".  How often
do you see a biologist do an experiment and then announce "See, this
is the secret of Life".  No.  He says, "this shows that enzyme
FOO is involved in degrading substrate BAR".

------------------------------

Date: 3 Nov 1983 14:45-PST
From: ISAACSON@USC-ISI
Subject: Re: Inscrutable Intelligence


I think that your message was really addressed to Minsky, who
already replied.

I also think that the most one can hope for are confirmations of
"partial theories" relating, respectively, to various aspects
underlying phenomena of "intelligence".  Note that I say
"phenomena" (plural).  Namely, we may have on our hands a broad
spectrum of "intelligences", each one of which the manifestation
of somewhat *different* mix of underlying ingredients.  In fact,
for some time now I feel that AI should really stand for the
study of Artificial Intelligences (plural) and not merely
Artificial Intelligence (singular).

------------------------------

Date: Thu, 3 Nov 1983  19:29 EST
From: BATALI%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence

    From: MINSKY%MIT-OZ at MIT-MC.ARPA

    do you really think Biologists need definitions of Life
    for such purposes?

No, but if anyone were claiming to be building "Artificial Life",
that person WOULD need some way to evaluate research.  Remember, we're
not just trying to find out things about intelligence, we're not just
trying to see what it does -- like the biochemist who discovers enzyme
FOO -- we're trying to BUILD intelligences.  And that means that we
must have some relatively precise notion of what we're trying to build.

    Finally, I simply don't think this is a compact phenomenon.
    Any such "account", if brief, will be very partial and incomplete.
    To expect a test to show that "the account is correct" depends
    on the nature of the partial theory.  In a nutshell, I still
    don't see any use at all for
    such a definition, and it will lead to calling all sorts of
    partial things "intelligence".

If the account is partial and incomplete, and leads to calling partial
things intelligence, then the account must be improved or rejected.
I'm not claiming that an account must be short, just that we need
one.

    The kinds of accounts to confirm
    are things like partial theories that need their own names, like

       heuristic search method
       credit-assignment scheme
       knowledge-representation scheme, etc.


But why are these things interesting?  Why is heuristic search better
than "blind" search?  Why need we assign credit?  Etc?  My answer:
because such things are the "right" thing to do for a program to be
intelligent.  This answer appeals to a pre-theoretic conception of
what intelligence is.   A more precise notion would help us
assess the relevance of these and other methods to AI.

One potential reason to make a more precise "definition" of
intelligence is that such a definition might actually be useful in
making a program intelligent.  If we could say "do that" to a program
while pointing to the definition, and if it "did that", we would have
an intelligent program.  But I am far too optimistic.  (Perhaps
"childishly" so).

------------------------------

End of AIList Digest
********************

∂05-Nov-83  0107	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #90
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Nov 83  01:06:57 PST
Date: Friday, November 4, 1983 9:43PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #90
To: AIList@SRI-AI


AIList Digest            Saturday, 5 Nov 1983      Volume 1 : Issue 90

Today's Topics:
  Intelligence,
  Looping Problem
----------------------------------------------------------------------

Date: Thu, 3 Nov 1983  23:46 EST
From: MINSKY%MIT-OZ@MIT-MC.ARPA
Subject: Inscrutable Intelligence


     One potential reason to make a more precise "definition" of
     intelligence is that such a definition might actually be useful
     in making a program intelligent.  If we could say "do that" to a
     program while pointing to the definition, and if it "did that",
     we would have an intelligent program.  But I am far too
     optimistic.

I think so.  You keep repeating how good it would be to have a good
definition of intelligence and I keep saying it would be as useless as
the biologists' search for the definition of "life".  Evidently
we're talking past each other so it's time to quit.

Last word: my reason for making the argument was that I have seen
absolutely no shred of good ideas in this forum, apparently because of
this definitional orientation.  I admit the possibility that some
good mathematical insight could emerge from such discussions.  But
I am personally sure it won't, in this particular area.

------------------------------

Date: Friday, 4 November 1983, 01:17-EST
From: jcma@MIT-MC
Subject: Inscrutable Intelligence

                          [Reply to Minsky.]


BOTTOM LINE:  Have you heard of OPERATIONAL DEFINITIONS?

You are correct in pointing out that we need not have the ultimate definition
of intelligence.  But, it certainly seems useful for the practical purposes of
investigating the phenomena of intelligence (whether natural or artificial) to
have at least an initial approximation, an operational definition.

Some people (e.g., Winston) have proposed "people-like behavior" as their
operational definition for intelligence.  Perhaps you can suggest an
incremental improvement over that rather vague definition.

If artificial intelligence can't come up with an operational definition of
intelligence, no matter how crude, it tends to undermine the credibility of the
discipline and encourage the view that AI researchers are flakey.  Moreover,
it makes it very difficult to determine the degree to which a program exhibits
"intelligence."

If you were being asked to spend $millions on a field of inquiry, wouldn't you
find it strange (bordering on absurd) that the principal proponents couldn't
render an operational definition of the object of investigation?

p.s.  I can't imagine that psychology has no operational definition of
intelligence (in fact, what is it?).  So, if worst comes to worst, AI can just
borrow psychology's definition and improve on it.

------------------------------

Date: Fri, 4 Nov 1983  09:57 EST
From: Dan Carnese <DJC%MIT-OZ@MIT-MC.ARPA>
Subject: Inscrutable Intelligence

There's a wonderful quote from Wittgenstein that goes something like:

  One of the most fundamental sources of philosophical bewilderment is to have
  a substantive but be unable to find the thing that corresponds to it.

Perhaps the conclusion from all this is that AI is an unfortunate name for the
enterprise, since no clear definitions for I are available.  That shouldn't
make it seem any more flakey than, say, "operations research" or "management
science" or "industrial engineering" etc. etc.  People outside a research area
care little what it is called; what it has done and is likely to do is
paramount.

Trying to find the ultimate definition for field-naming terms is a wonderful,
stimulating philosophical enterprise.  However, one can make an empirical
argument that this activity has little impact on technical progress.

------------------------------

Date: 4 Nov 1983 8:01-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest   V1 #89

        This discussion on intelligence is starting to get very boring.
I think if you want a theoretical basis, you are going to have to
forget about defining intelligence and work on a higher level. Perhaps
finding representational schemes to represent intelligence would be a
more productive line of pursuit. There are such schemes in existence.
As far as I can tell, the people in this discussion have either scorned
them, or have never seen them. Perhaps you should go to the library for
a while and look at what all the great philosophers have said about the
nature of intelligence rather than rehashing all of their arguments in
a light and incomplete manner.
                        Fred

------------------------------

Date: 3 Nov 83 0:46:16-PST (Thu)
From: hplabs!hp-pcd!orstcs!hakanson @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: hp-pcd.2284


No, no, no.  I understood the point as meaning that the faster intelligence
is merely MORE intelligent than the slower intelligence.  Who's to say that
an amoeba is not intelligent?  It might be.  But we certainly can agree that
most of us are more intelligent than an amoeba, probably because we are
"faster" and can react more quickly to our environment.  And some super-fast
intelligent machine coming along does NOT make us UNintelligent, it just
makes it more intelligent than we are.  (According to the previous view
that faster = more intelligent, which I don't necessarily subscribe to.)

Marion Hakanson         {hp-pcd,teklabs}!orstcs!hakanson        (Usenet)
                        hakanson@{oregon-state,orstcs}          (CSnet)

------------------------------

Date: 31 Oct 83 13:18:58-PST (Mon)
From: decvax!duke!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: re: transcendental recursion [& reply]
Article-I.D.: ecsvax.1457

i'm also new on this net, but this item seemed like
a good one to get my feet wet with.
     if we're going to pursue the topic of consciousness
vs intelligence, i think it's important not to get
confused about consciousness vs *self*-consciousness at
the beginning.  there's a perfectly clear sense in which
any *sentient* being is "conscious"--i.e., conscious *of*
changes in its environment.  but i have yet to see any
good reason for supposing that cats, rats, bats, etc.
are *self*-conscious, e.g., conscious of their own
states of consciousness.  "introspective" or "self-
monitoring" capacity goes along with self-consciousness,
but i see no particular reason to suppose that it has
anything special to do with *consciousness* per se.
     as long as i'm sticking my neck out, let me throw
in a cautionary note about confusing intelligence and
adaptability.  cockroaches are as adaptable as all get
out, but not terribly intelligent; and we all know some
very intelligent folks who can't adapt to novelties at
all.
                      --jay rosenberg (escvax!unbent)

[I can't go along with the cockroach claim.  They are a
successful species, but probably haven't changed much in
millions of years.  Individual cockroaches are elusive,
but can they solve mazes or learn tricks?  As for the
"intelligent folks":  I previously stated my preference
for power tests over timed aptitude tests -- I happen to
be rather slow to change channels myself.  If these people
are unable to adapt even given time, on what basis can we
say that they are intelligent?  If they excel in particular
areas (e.g. idiot savants), we can qualify them as intelligent
within those specialties, just as we reduce our expectations
for symbolic algebra programs.  If they reached states of
high competence through early learning, then lost the ability
to learn or adapt further, I will only grant that they >>were<<
intelligent.  -- KIL]

------------------------------

Date: 3 Nov 83 0:46:00-PST (Thu)
From: hplabs!hp-pcd!orstcs!hakanson @ Ucb-Vax
Subject: Re: Semi-Summary of Halting Problem Disc [& Comment]


A couple weeks ago, I heard Marvin Minsky speak up at Seattle.  Among other
things, he discussed this kind of "loop detection" in an AI program.  He
mentioned that he has a paper just being published, which he calls his
"Joke Paper," which discusses the applications of humor to AI.  According
to Minsky, humor will be a necessary part of any intelligent system.

If I understood correctly, he believes that there is (will be) a kind
of a "censor" which recognizes "bad situations" that the intelligent
entity has gotten itself into.  This censor can then learn to recognize
the precursors of this bad situation if it starts to occur again, and
can intervene.  This then is the reason why a joke isn't funny if you've
heard it before.  And it is funny the first time because it's "absurd,"
the laughter being a kind of alarm mechanism.

Naturally, this doesn't really help with a particular implementation,
but I believe that I agree with the intuitions presented.  It seems to
agree with the way I believe *I* think, anyway.

I hope I haven't misrepresented Minsky's ideas, and to be sure, you should
look for his paper.  I don't recall him mentioning a title or publisher,
but he did say that the only reference he could find on humor was a book
by Freud ("Jokes and Their Relation to the Unconscious").

(Gee, I hope his talk wasn't all a joke....)

Marion Hakanson         {hp-pcd,teklabs}!orstcs!hakanson        (Usenet)
                        hakanson@{oregon-state,orstcs}          (CSnet)


[Minsky has previously mentioned this paper in AIList.  You can get
a copy by writing to Minsky%MIT-OZ@MIT-MC.  -- KIL]

------------------------------

Date: 31 Oct 83 7:52:43-PST (Mon)
From: hplabs!hao!seismo!ut-sally!ut-ngp!utastro!nather @ Ucb-Vax
Subject: Re: The Halting Problem
Article-I.D.: utastro.766

A common characteristic of humans that is not shared by the machines
we build and the programs we write is called "boredom."  All of us get
bored running around the same loop again and again, especially if nothing
is seen to change in the process.  We get bored and quit.

         *--->    WARNING!!!   <---*

If we teach our programs to get bored, we will have solved the
infinite-looping problem, but we will lose our electronic slaves who now
work, uncomplainingly, on the same tedious jobs day in and day out.  I'm
not sure it's worth the price.

                                    Ed Nather
                             ihnp4!{kpno, ut-sally}!utastro!nather

------------------------------

Date: 31 Oct 83 20:03:21-PST (Mon)
From: harpo!eagle!hou5h!hou5g!hou5f!hou5e!hou5d!mat @ Ucb-Vax
Subject: Re: The Halting Problem
Article-I.D.: hou5d.725

    If we teach our programs to get bored, we will have solved the
    infinite-looping problem, but we will lose our electronic slaves who now
    work, uncomplainingly, on the same tedious jobs day in and day out.  I'm
    not sure it's worth the price.

Hmm.  I don't usually try to play in this league, but it seems to me that there
is a place for everything and every talent.  Build one machine that gets bored
(in a controlled way, please) to work on Fermat's last Theorem.  Build another
that doesn't to check tolerances on camshafts or weld hulls.  This [solving
the looping problem] isn't like destroying one's virginity, you know.

                                                Mark Terribile
                                                Duke Of deNet

------------------------------

End of AIList Digest
********************

∂07-Nov-83  0920	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #91
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Nov 83  09:19:23 PST
Date: Sunday, November 6, 1983 10:51PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #91
To: AIList@SRI-AI


AIList Digest             Monday, 7 Nov 1983       Volume 1 : Issue 91

Today's Topics:
  Parallelism,
  Turing Machines
----------------------------------------------------------------------

Date: 1 Nov 83 22:39:06-PST (Tue)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!israel @ Ucb-Vax
Subject: Re: Parallelism and Consciousness
Article-I.D.: umcp-cs.3498

[Initial portion missing. -- KIL]

a processing unit that we can currently build.  If you mean 'at the
exact same time', then I defy you to show me a case where this is
necessary.

The statement "No algorithm is inherently parallel", just means that
the algoritm itself (as opposed to the engineering of putting it
into practice) does not necessarily have to be done in parallel.
Any parallel algorithm that you give me, I can write a sequential
algorithm that does the same thing.
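[The sequential-simulation claim above can be sketched in modern terms.
The following is a hypothetical illustration, not anything proposed in
the discussion: two "parallel" processes, written as step-wise
generators, are run to completion on one sequential machine by a
round-robin scheduler that interleaves their steps.  -- Ed.]

```python
# Hypothetical sketch: a round-robin scheduler that runs two "parallel"
# processes on one sequential machine by interleaving their steps.
# Each process is a generator; every `yield` marks one atomic step.

def counter(name, n):
    for i in range(n):
        yield f"{name}:{i}"   # one atomic step of this process

def run_interleaved(*procs):
    """Execute all processes by alternating single steps, sequentially."""
    trace = []
    procs = list(procs)
    while procs:
        for p in procs[:]:
            try:
                trace.append(next(p))
            except StopIteration:
                procs.remove(p)   # this process has finished
    return trace

trace = run_interleaved(counter("A", 2), counter("B", 2))
# Every step of both processes appears in one sequential trace:
# ['A:0', 'B:0', 'A:1', 'B:1']
```

The simulation preserves what each process computes; only the wall-clock
timing differs, which is exactly the engineering point set aside above.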

Now, if you assume a finite number of processors for the parallel
algorithm, then the question of whether the sequential algorithm will
work under time constraints is dependent on the speed of the
processor worked on.  I don't know if there has been any work
done on theoretical limits of the speed of a processor (Does
anyone know? is this a meaningful question?), but if we assume
none (a very chancy assumption at best), then any parallel algorithm
can be done sequentially in practice.

If you allow an infinite number of processors for the parallel
algorithm, then the sequential version of the algorithm can't
ever work in practice.  But can the parallel version?  What
do we run it on?  Can you picture an infinitely parallel
computer which has robots with shovels with it, and when the
computer needs an unallocated processor and has none, then
the robots dig up the appropriate minerals and construct
the processor.  Of course, it doesn't need to be said that
if the system notices that the demand for processors is
faster than the robots' processor production output, then
the robots make more robots to help them with the raw materials
gathering and the construction.  :-)
--

↑-↑ Bruce ↑-↑

University of Maryland, Computer Science
{rlgvax,seismo}!umcp-cs!israel (Usenet)    israel.umcp-cs@CSNet-Relay (Arpanet)

------------------------------

Date: 31 Oct 83 19:55:44-PST (Mon)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: Parallelism and Consciousness - (nf)
Article-I.D.: uiucdcs.3572


I see no reason why consciousness should be inherently parallel.  But
it turns out that the only examples of conscious entities (i.e. those
which nearly everyone agrees are conscious) rely heavily on parallelism
at several levels.  This is NOT to say that they derive their
consciousness from parallelism, only that there is a high correlation
between the two.

There are good reasons why natural selection would favor parallelism.
Besides the usually cited ones (e.g. speed, simplicity) is the fact
that the world goes by very quickly, and carries a high information
content.  That makes it desirable and advantageous for a conscious
entity to be aware of several things at once.  This strongly suggests
parallelism (although a truly original species might get away with
timesharing).

Pushing in the other direction, I should note that it is not necessary
to bring the full power of the human intellect to bear against ALL of
our environment at once.  Hence the phenomenon of attention.  It
suffices to have weaker processes in charge of uninteresting phenomena
in the environment, as long as these have the ability to enlist more of
the organism's information processing power when the situation becomes
interesting enough to demand it.  (This too could be finessed with a
clever timesharing scheme, but I know of no animal that does it that
way.)

Once again, none of this entails a causal connection between
parallelism and consciousness.  It just seems to have worked out that
nature liked it that way (in the possible world in which we live).

Rick Dinitz
...!uiucdcs!uicsl!dinitz

------------------------------

Date: 1 Nov 83 11:53:58-PST (Tue)
From: hplabs!hao!seismo!rochester!blenko @ Ucb-Vax
Subject: Re:  Parallelism & Consciousness
Article-I.D.: rocheste.3648

Interesting to see this discussion taking place among people
(apparently) committed to an information-processing model for
intelligence.

I would be satisfied with the discovery of mechanisms that duplicate
the information-processing functions associated with intelligence.

The issue of real-time performance seems to be independent of
functional performance (not from an engineering point of view, of
course; ever tell one of your hardware friends to "just turn up the
clock"?).  The fact that evolutionary processes act on both the
information-processing and performance characteristics of a system may
argue for the (evolutionary) superiority of one mechanism over another;
it does not provide prescriptive information for developing functional
mechanisms, however, which is the task we are currently faced with.

        Tom

------------------------------

Date: 1 Nov 83 19:01:59-PST (Tue)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax
Subject: Re: Parallelism and Consciousness
Article-I.D.: umcp-cs.3523

                No algorithm is inherently parallel.

        The algorithms you are thinking about occur in the serial world of
        the Turing machine.  Turing machines, remember, have only one
        input.  Consider what happens to your general-purpose Turing machine
        when it must compute on more than one input simultaneously!

        So existence in the real world may require parallelism.


    How do you define simultaneously?  If you mean within a very short
    period of time, then that requirement is based on the maximum speed of
    a processing unit that we can currently build.  If you mean 'at the
    exact same time', then I defy you to show me a case where this is
    necessary.

A CHALLENGE!!!  Grrrrrrrr......

Okay, let's say we have two discrete inputs that must
be monitored by a Turing machine.  Signals may come in
over these inputs simultaneously.  How do you propose
to monitor both discretes at the same time?  You can't
monitor them as one input because your Turing machine
is allowed only one state at a time on its read/write head.
Remember that the states of the inputs run as fast as
those of the Turing machine.


You can solve this problem by building two Turing machines,
each of which may look at the discretes.

I don't have to appeal to practical speeds of processors.
We're talking pure theory here.
--

                                        - Speaker-To-Stuffed-Animals
                                        speaker@umcp-cs
                                        speaker.umcp-cs@CSnet-Relay

------------------------------

Date: 1 Nov 83 18:41:10-PST (Tue)
From: hplabs!hao!seismo!rlgvax!cvl!umcp-cs!speaker @ Ucb-Vax
Subject: Infinite loops and Turing machines...
Article-I.D.: umcp-cs.3521

        One of the things I did in my undergrad theory class was to prove that
        a multiple-tape Turing machine is equivalent to one with a single tape
        (several tapes were very handy for programming).  Also, we showed that
        a TM with a 2-dimensional tape infinite in both x and y was also
        equivalent to a single-tape TM.  On the other hand, the question of
        a machine with an infinite number of read heads was left open...

Aha!  I knew someone would come up with this one!
Consider that when we talk of simultaneous events... we speak of
simultaneous events that occur within one Turing machine state
and outside of the Turing machine itself.  Can a one-tape
Turing machine read the input of 7 discrete sources at once?
A 7 tape machine with 7 heads could!

The reason that they are not equivalent is that we have
allowed for external states (events) outside of the machine
states of the Turing machine itself.
--

                                        - Speaker-To-Stuffed-Animals
                                        speaker@umcp-cs
                                        speaker.umcp-cs@CSnet-Relay

------------------------------

Date: 1 Nov 83 16:56:19-PST (Tue)
From: hplabs!hao!seismo!philabs!linus!security!genrad!mit-eddie!rlh @
      Ucb-Vax
Subject: Re: Parallelism and Consciousness
Article-I.D.: mit-eddi.885

    requirement is based on the maximum speed of
    a processing unit that we can currently build.  If you mean 'at the
    exact same time', then I defy you to show me a case where this is
    necessary.

    The statement "No algorithm is inherently parallel", just means that
    the algorithm itself (as opposed to the engineering of putting it
    into practice) does not necessarily have to be done in parallel.
    Any parallel algorithm that you give me, I can write a sequential
    algorithm that does the same thing.

Consider the retina, and its processing algorithm.  It is certainly
true that once the raw information has been collected and in some way
band-limited, it can be processed in either fashion; but one part of
the algorithm must necessarily be implemented in parallel.  To get
the photon efficiencies that are needed for dark-adapted vision
(part of the specifications for the algorithm) one must have some
continuous, distributed attention to the light field.  If I match
the spatial and temporal resolution of the retina, call it several thousand
by several thousand by some milliseconds, by sequentially scanning with
a single receptor, I can only catch one in several-squared million
photons, not the order of one in ten that our own retina achieves.
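[The photon-efficiency arithmetic in the message above can be checked
with rough numbers.  The resolution and efficiency figures below are
illustrative assumptions filled in for "several thousand by several
thousand" and "order of one in ten", not measured values.  -- Ed.]

```python
# Back-of-the-envelope check of the photon-efficiency argument.
# The figures are illustrative assumptions, not measured retinal data.

pixels = 3000 * 3000          # "several thousand by several thousand"
parallel_efficiency = 0.1     # "order of one in ten" for the real retina

# A single receptor scanning sequentially watches each pixel only
# 1/pixels of the time, so photons arriving anywhere else are lost.
sequential_efficiency = parallel_efficiency / pixels

print(f"parallel:   1 photon in {1 / parallel_efficiency:.0f}")
print(f"sequential: 1 photon in {1 / sequential_efficiency:.0e}")
# Sequential capture works out to roughly 1 photon in 90 million --
# the "one in several-squared million" of the argument above.
```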

------------------------------

Date: 2 Nov 83 19:44:21-PST (Wed)
From: pur-ee!uiucdcs!uicsl!preece @ Ucb-Vax
Subject: Re: Parallelism and Conciousness - (nf)
Article-I.D.: uiucdcs.3633


There is a significant difference between saying "No algorithm is
inherently parallel" and saying "Any algorithm can be carried out
without parallelism."  There are many algorithms that are
inherently parallel. Many (perhaps all) of them can be SIMULATED
without true parallel processing.

I would, however, support the contention that computational models
of natural processes need not follow the same implementations, and
that a serial simulation of a parallel process can produce the
same result.

scott preece
ihnp4!uiucdcs!uicsl!preece

------------------------------

Date: 2 Nov 83 15:22:20-PST (Wed)
From: hplabs!hao!seismo!philabs!linus!security!genrad!grkermit!masscomp!kobold!tjt @ Ucb-Vax
Subject: Re: Parallelism and Consciousness
Article-I.D.: kobold.191

Gawd!! Real-time processing with a Turing machine?!
Pure theory indeed!

Turing machines are models for *abstract* computation.  You get to
write an initial string on the tape(s) and start up the machine: it
does not monitor external inputs changing asynchronously.  You can
define your *own* machine which is just like a Turing machine, except
that it *does* monitor external inputs changing asynchronously (Speaker
machines anyone :-).

Also, if you want to talk *pure theory*, I could just enlarge my input
alphabet on a single input to encode all possible simultaneous values
at multiple inputs.


--
        Tom Teixeira,  Massachusetts Computer Corporation.  Littleton MA
        ...!{harpo,decvax,ucbcad,tektronix}!masscomp!tjt   (617) 486-9581

------------------------------

Date: 2 Nov 83 16:28:10-PST (Wed)
From: hplabs!hao!seismo!philabs!linus!security!genrad!grkermit!masscomp!kobold!tjt @ Ucb-Vax
Subject: Re: Parallelism and Consciousness
Article-I.D.: kobold.192

In regards to the statement

        No algorithm is inherently parallel.

which has been justified by the ability to execute any "parallel"
program on a single sequential processor.

The difference between parallel and sequential algorithms is one of
*expressive* power rather than *computational* power.  After all, if
it's just computational power you want, why aren't you all programming
Turing machines?

The real question is what is the additional *expressive* power of
parallel programs.  The additional expressive power of parallel
programming languages is a result of not requiring the programmer to
serialize steps of his computation when he is uncertain whether either
one will terminate.
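[The point about not serializing steps with uncertain termination is
the classic dovetailing argument; a hypothetical sketch follows.  If you
run P to completion and then Q, a divergent P means Q's answer is never
reached, whereas interleaving their steps yields a result whenever
either one halts.  -- Ed.]

```python
# Hypothetical sketch: if task P never terminates, "run P then Q"
# never produces Q's answer, but dovetailing (interleaving steps)
# returns the result of whichever task halts first.

def diverge():
    while True:
        yield None            # a step-wise computation that never halts

def halts_with(value, steps=3):
    for _ in range(steps):
        yield None            # some finite amount of work
    return value              # a generator's return value is its result

def dovetail(p, q):
    """Alternate single steps of p and q; return the first result."""
    tasks = [p, q]
    while True:
        for t in tasks:
            try:
                next(t)
            except StopIteration as done:
                return done.value

result = dovetail(diverge(), halts_with(42))
# result == 42, even though running diverge() to completion first
# would loop forever -- the serialization the programmer must avoid.
```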
--
        Tom Teixeira,  Massachusetts Computer Corporation.  Littleton MA
        ...!{harpo,decvax,ucbcad,tektronix}!masscomp!tjt   (617) 486-9581

------------------------------

Date: 4 Nov 83 8:13:22-PST (Fri)
From: hplabs!hao!seismo!ut-sally!ut-ngp!utastro!nather @ Ucb-Vax
Subject: Our Parallel Eyeballs
Article-I.D.: utastro.784


        Consider the retina, and its processing algorithm. [...]

There seems to be a misconception here.  It's not clear to me that "parallel
processing" includes simple signal accumulation.  Astronomers use area
detectors that simply accumulate the charge deposited by photons arriving
on an array of photosensitive diodes; after the needed "exposure" the charge
image is read out (sequentially) for display, further processing, etc.
If the light level is high, readout can be repeated every few milliseconds,
or, in some devices, proceed continuously, allowing each pixel to accumulate
photons between readouts, which reset the charge to zero.
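The accumulate-then-read-out cycle described above can be sketched as a toy simulation; the class and names here are invented for illustration, not a model of any real detector:

```python
class ChargeArray:
    """Toy area detector: photons accumulate charge per pixel with no
    processor involved; readout is strictly sequential and resets charge."""
    def __init__(self, n_pixels):
        self.charge = [0] * n_pixels

    def photon(self, pixel):
        self.charge[pixel] += 1        # accumulation happens "for free"

    def readout(self):
        """Read every pixel in sequence, resetting each to zero."""
        frame, self.charge = self.charge, [0] * len(self.charge)
        return frame

ccd = ChargeArray(4)
for hit in (0, 2, 2, 3):               # four photons arrive "in parallel"
    ccd.photon(hit)
assert ccd.readout() == [1, 0, 2, 1]
assert ccd.readout() == [0, 0, 0, 0]   # charge was reset by the readout
```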

I note in passing that we tend to think sequentially (our self-awareness
center seems to be serial) but operate in parallel (our heart beats along,
and body chemistry gets its signals even when we're chewing gum).  We
have, for the most part, built computers in our own (self)image: serial.
We're encountering real physical limits in serial computing (the finite
speed of light) and clearly must turn to parallel operations to go much
faster.  How we learn to "think in parallel" is not clear, but people
who do the logic design of computers try to get as many operations into
one clock cycle as possible, and maybe that's the place to start.

                                         Ed Nather
                                         ihnp4!{ut-sally,kpno}!utastro!nather

------------------------------

Date: 3 Nov 83 9:39:07-PST (Thu)
From: decvax!microsoft!uw-beaver!ubc-visi!majka @ Ucb-Vax
Subject: Get off the Turing Machines
Article-I.D.: ubc-visi.513

From: Marc Majka <majka@ubc-vision.UUCP>

A Turing machine is a theoretical model of computation.
<speaker.umcp-cs@CSnet-Relay> points out that all this noise about
"simultaneous events" is OUTSIDE of the notion of a Turing machine. Turing
machines are a theoretical formulation which gives theoreticians a formal
system in which to consider problems in computability, decidability, the
"hardness" of classes of functions, and so on.  They don't really care whether
membership in a type 0 grammar is decidable in less than 14.2 seconds.
The unit of time is the state transition, or "move" (as Turing called it).
If you want to discuss time (in seconds or meters), you are free to invent a
new model of computation which includes that element.  You are then free to
prove theorems about it and attempt to prove it equivalent to other models
of computation.  Please do this FORMALLY and post (or publish) your results.
Otherwise, invoking Turing machines is a silly and meaningless exercise.
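The point that the move is the machine's only unit of time falls out naturally from any simulator: cost is counted in transitions, not seconds. A minimal sketch (the machine and its encoding are invented for illustration):

```python
def run_tm(delta, tape, state="q0", accept="halt", max_moves=1000):
    """Simulate a one-tape Turing machine.  Cost is measured in moves
    (state transitions) -- the model's only notion of time."""
    tape = dict(enumerate(tape))       # sparse tape, blank is "_"
    head, moves = 0, 0
    while state != accept and moves < max_moves:
        symbol = tape.get(head, "_")
        state, write, step = delta[(state, symbol)]
        tape[head] = write
        head += step
        moves += 1
    return state, moves

# A machine that flips bits until it reads a blank, then halts.
delta = {("q0", "0"): ("q0", "1", +1),
         ("q0", "1"): ("q0", "0", +1),
         ("q0", "_"): ("halt", "_", 0)}
state, moves = run_tm(delta, "0110")
assert state == "halt" and moves == 5  # four flips plus one halting move
```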

Marc Majka

------------------------------

Date: 3 Nov 83 19:47:04-PST (Thu)
From: pur-ee!uiucdcs!uicsl!preece @ Ucb-Vax
Subject: Re: Parallelism and Consciousness - (nf)
Article-I.D.: uiucdcs.3677


Arguments based on speed of processing aren't acceptable.  The
question of whether parallel processing is required has to be
in the context of arbitrarily fast processors.  Thus you can't
talk about simultaneous inputs changing state at processor speed
(unless you're considering the interesting case where the input
is directly monitoring the processor itself and therefore
intrinsically as fast as the processor; in that case you can't
cope, but I'm not sure it's an interesting case with respect to
consciousness).

Consideration of the retina, on the other hand, brings up the
basic question of what is a parallel processor.  Is an input
latch (allowing delayed polling) or a multi-input averager a
parallel process or just part of the plumbing? We can also, of
course, group the input bits and assume an arbitrarily fast
processor dealing with the bits 64 (or 128 or 1 million) at a
time.

I don't think I'd be willing to say that intelligence or
consciousness can't be slow. On the other hand, I don't think
there's too much point to this argument, since it's pretty clear
that producing a given level of performance will be easier with
parallel processing.

scott preece
ihnp4!uiucdcs!uicsl!preece

------------------------------

End of AIList Digest
********************

∂07-Nov-83  1507	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #92
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Nov 83  15:06:39 PST
Date: Sunday, November 6, 1983 11:06PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #92
To: AIList@SRI-AI


AIList Digest             Monday, 7 Nov 1983       Volume 1 : Issue 92

Today's Topics:
  Halting Problem,
  Metaphysics,
  Intelligence
----------------------------------------------------------------------

Date: 31 Oct 83 19:13:28-PST (Mon)
From: harpo!floyd!clyde!akgua!psuvax!simon @ Ucb-Vax
Subject: Re: Semi-Summary of Halting Problem Discussion
Article-I.D.: psuvax.335

About halting:
it is unclear what is meant precisely by "can a program of length n
decide whether programs of length <= n will halt".  First, the input
to the smaller programs is not specified in the question.  Assuming
that it is a unique input for each program, known a priori (for
example, the index of the program), then the answer is obviously YES
for the following restriction: the deciding program has size 2**n and
decides on smaller programs (a few constants are neglected here).
There are fewer than 2*2**n programs of length <=n.  For each one,
represent halting on its specific input by 1 and looping by 0.  The
resulting bit string is essentially the program needed - it clearly
exists.  Getting hold of it is another matter - it
is also obvious that this cannot be done in a uniform manner for every
n because of the halting problem.  At the cost of more sophisticated
coding, and tremendous expenditure of time, a similar construction can
be made to work for programs of length O(n).
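The counting argument can be pictured directly: with a fixed input per program, the halting behaviour of all programs of length <= n is just a bit string with one bit per program, and the deciding "program" is a table lookup. The table below is made up for illustration; the argument only says such a table exists, not that we can compute it:

```python
def make_decider(halting_bits):
    """Given the (in general uncomputable) table of halting bits, the
    decider itself is trivial: pure table lookup, no simulation."""
    def decides_halting(program_index):
        return halting_bits[program_index] == 1
    return decides_halting

n = 3
num_programs = 2 ** (n + 1) - 2        # fewer than 2*2**n binary programs of length <= n
hypothetical_bits = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0]  # made-up oracle data
assert len(hypothetical_bits) == num_programs

decider = make_decider(hypothetical_bits)
assert decider(0) is True              # program 0 halts, per the made-up table
assert decider(1) is False
```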


If the input is not fixed, the question is obviously hopeless - there are
very small universal programs.

As a practical matter it is not the halting problem that is relevant, but its
subrecursive analogues.
janos simon

------------------------------

Date: 3 Nov 83 13:03:22-PST (Thu)
From: harpo!eagle!mhuxl!mhuxm!pyuxi!pyuxss!aaw @ Ucb-Vax
Subject: Re: Halting Problem Discussion
Article-I.D.: pyuxss.195

A point missing in this discussion is that the halting problem is
equivalent to the question:
        Can a method be formulated to attempt to solve ANY problem
        which can determine if it is not getting closer to the
        solution
so the meta-halters (not the clothing) can't be more than disguised
time limits etc. for the general problem, since they CANNOT MAKE
INFERENCES ABOUT THE PROCESS they are to halt.
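Such a meta-halter, having no access to the reasoning of the process it supervises, really can do nothing but count. A minimal sketch of a step-budget watchdog (the names are mine, for illustration only):

```python
def meta_halter(step_fn, budget):
    """Run step_fn repeatedly; give up after `budget` steps.  Knowing
    nothing about the supervised process, all we can do is count --
    a disguised time limit, exactly as described above."""
    for steps in range(1, budget + 1):
        if step_fn():                  # True means the process finished
            return ("halted", steps)
    return ("gave up", budget)

counter = {"n": 0}
def finishes_at_five():
    counter["n"] += 1
    return counter["n"] >= 5

assert meta_halter(finishes_at_five, 100) == ("halted", 5)

def never_finishes():
    return False

assert meta_halter(never_finishes, 100) == ("gave up", 100)
```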
                Aaron Werman pyuxi!pyuxss!aaw

------------------------------

Date: 9 Nov 83 21:05:28-EST (Wed)
From: pur-ee!uiucdcs!uokvax!andree @ Ucb-Vax
Subject: Re: re: awareness - (nf)
Article-I.D.: uiucdcs.3586


Robert -

If I understand correctly, your reasons for preferring dualism (or
physicalism) to functionalism are:

        1) It seems more intuitively obvious.
        2) You are worried about legal/ethical implications of functionalism.

I find that somewhat amusing, as those are EXACTLY my reasons for
preferring functionalism to either dualism or physicalism.  The legal
implications of differentiating between groups by arbitrarily denying
`souls' to one of them are well-known; it usually leads to slavery.

        <mike

------------------------------

Date: Saturday, 5 November 1983, 03:03-EST
From: JCMA@MIT-AI
Subject: Inscrutable Intelligence

    From: Dan Carnese <DJC%MIT-OZ@MIT-MC.ARPA>

    Trying to find the ultimate definition for field-naming terms is a
    wonderful, stimulating philosophical enterprise.

I think you missed the point altogether.  The idea is that *OPERATIONAL
DEFINITIONS* are known to be useful and are found in all mature disciplines
(e.g., physics).  The fact that AI doesn't have an operational definition of
intelligence simply points up the fact that the field of inquiry is not yet a
discipline.  It is a proto-discipline precisely because key issues remain
vague and undefined and because there is no paradigm (in the Kuhnian sense of
the term, not popular vulgarizations).

That means that it is not possible to specify criteria for certification in
the field, not to mention the requisite curriculum for the field.  This all
means that there is lots of work to be done before AI can enter the normal
science phase.

    However, one can make an empirical argument that this activity has little
    impact on technical progress.

Let's see your empirical argument.  I haven't noticed any intelligent machines
running around the AI lab lately.  I certainly haven't noticed any that can
carry on any sort of reasonable conversation.  Have you?  So, where is all
this technical progress regarding understanding intelligence?

Make sure you don't fall into the trap of thinking that intelligent machines
are here today (Douglas Hofstadter debunks this position in his "Artificial
Intelligence: Subcognition as Computation," CS Dept., Indiana U., Nov. 1982).

------------------------------

Date: 5 November 1983 15:38 EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Turing test in everyday life

Have you ever gotten one of those phone calls from people who are trying
to sell you a magazine subscription?  Those people sound *awfully* like
computers!  They have a canned speech, with canned places to wait for
human (customer) response, and they seem to have a canned answer to
anything you say.  They are also *boring*!

I know the entity at the other end of the line is not a computer
(because they recognize my voice -- someone correct me if this is not a
good test) but we might ask: how good would a computer program have to
be to fool someone into thinking that it is human, in this limited case?
I suspect you wouldn't have to do much, since the customer doesn't
expect much from the salescreature who phones.  Perhaps there is a
lesson here.

-- Steve

[There is a system, in use, that can recognize affirmative and negative
replies to its questions.  It also stores a recording of your responses
and can play the recording back to you before ending the conversation.
The system is used for selling (e.g., record albums) and for dunning,
and is effective partly because it is perceived as "mechanical".  People
listen to it because of the novelty, it can be programmed to make negative
responses very difficult, and the playback of your own replies is very
effective.  -- KIL]

------------------------------

Date: 1 Nov 83 13:41:53-PST (Tue)
From: hplabs!hao!seismo!uwvax!reid @ Ucb-Vax
Subject: Slow Intelligence
Article-I.D.: uwvax.1129

When people's intelligence is evaluated, at least subjectively, it is common
to hear such things as "He is brilliant but never applies himself," or "She
is very intelligent, but can never seem to get anything accomplished due to
her short attention span."  This seems to imply to me that intelligence is
sort of like voltage--it is potential.  Another analogy might be a
weight-lifter, in the sense that no one doubts her
ability to do amazing physical things, based on her appearance, but she needn't
prove it on a regular basis....  I'm not at all sure that people's working
definition of intelligence has anything at all to do with either time or
survival.



Glenn Reid
..seismo!uwvax!reid  (reid@uwisc.ARPA)

------------------------------

Date: 2 Nov 83 8:08:19-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: intelligence and adaptability
Article-I.D.: ecsvax.1466

Just two quick remarks from a philosopher:

1.  It ain't just what you do; it's how you do it.
Chameleons *adapt* to changing environments very quickly--in a way
that furthers their goal of eating lots of flies.  But what they're doing
isn't manifesting *intelligence*.

2.   There's adapting and adapting.  I would have thought that
one of the best evidences of *our* intelligence is not our ability to
adapt to new environments, but rather our ability to adapt new
environments to *us*.  We don't change when our environment changes.
We build little portable environments which suit *us* (houses,
spaceships), and take them along.

------------------------------

Date: 3 Nov 83 7:51:42-PST (Thu)
From: decvax!tektronix!ucbcad!notes @ Ucb-Vax
Subject: What about physical identity? - (nf)
Article-I.D.: ucbcad.645


        It's surprising to me that people are still speaking in terms of
machine intelligence unconnected with a notion of a physical host that
must interact with the real world.  This is treated as a trivial problem
at most (I think Ken Laws said that one could attach any kind of sensing
device, and hence (??) set any kind of goal for a machine).  So why does
Hubert Dreyfus treat this problem as one whose solution is a *necessary*,
though not sufficient, condition for machine intelligence?

        But is it a solved problem?  I don't think so--nowhere near, from
what I can tell.  Nor is it getting the attention it requires for solution.
How many robots have been built that can infer their own physical limits
and capabilities?

        My favorite example is the oft-quoted SHRDLU conversation; the
following exchange has passed for years without comment:

        ->  Put the block on top of the pyramid
        ->  I can't.
        ->  Why not?
        ->  I don't know.

(That's not verbatim.)  Note that in human babies, fear of falling seems to
be hardwired.  A baby will still attempt, when old enough, to do things like
putting a block on top of a pyramid--but it certainly doesn't seem to need an
explanation for why it should not bother after the first few tries.  (And
at that age, it couldn't understand the explanation anyway!)

        SHRDLU would have to be taken down, and given another "rule".
SHRDLU had no sense of what it is to fall down.  It had an arm, and an
eye, but only a rather contrived "sense" of its own physical identity.
It is this sense that Dreyfus sees as necessary.
---
Michael Turner (ucbvax!ucbesvax.turner)

------------------------------

Date: 4 Nov 83 5:57:48-PST (Fri)
From: ihnp4!ihuxn!ruffwork @ Ucb-Vax
Subject: RE:intelligence and adaptability
Article-I.D.: ihuxn.400

I would tend to agree that it's not how a being adapts to its
environment, but how it changes the local environment to better
suit itself.

Also, I would have to say that adapting the environment
would only aid in ranking the intelligence of a being if that
action was a voluntary decision.  There are many instances
of creatures that alter their surroundings (water spiders come
to mind), but could they decide not to ???  I doubt it.

                        ...!iham1!ruffwork

------------------------------

Date: 4 Nov 83 15:36:33-PST (Fri)
From: harpo!eagle!hou5h!hou5a!hou5d!mat @ Ucb-Vax
Subject: Re: RE:intelligence and adaptability
Article-I.D.: hou5d.732

Man is the toolmaker and the principal tool user among all the living things
that we know of.  What does this mean?

Consider driving a car or skating.  When I do this, I have managed to
incorporate an external system into my own control system with its myriad
of pathways both forward and backward.

This takes place at a level below that which usually is considered to
constitute intelligent thought.  On the other hand, we can adopt external
things into our thought-model of the world in a way which no other creature
seems to be capable of.

Is there any causal relationship here?

                                        Mark Terribile
                                        DOdN

------------------------------

Date: 6 Nov 1983 20:54-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest   V1 #90

        Irwin Marin's course in AI started out by asking us to define
the term 'Natural Stupidity'. I guess artificial intelligence must be
anything both unnatural and unstupid. We had a few naturally stupid
examples to work with, so we got a definition quite quickly. Naturally
stupid types were unable to adapt, unable to find new representations,
and made of flesh and bone. Artificially intelligent types were
machines designed to adapt their responses and seek out more accurate
representations of their environment and themselves. Perhaps this would
be a good 'working' definition. At any rate, definitions are only
'working' if you work with them. If you can work with this one I
suggest you go to it and stop playing with definitions.
                FC

------------------------------

End of AIList Digest
********************

∂07-Nov-83  2011	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #93
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Nov 83  20:11:00 PST
Date: Monday, November 7, 1983 1:11PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #93
To: AIList@SRI-AI


AIList Digest            Tuesday, 8 Nov 1983       Volume 1 : Issue 93

Today's Topics:
  Implementations - Lisp for MV8000,
  Expert Systems - Troubleshooting & Switching Systems,
  Alert - IEEE Spectrum,
  Fifth Generation - Stalking The Gigalip,
  Intelligence - Theoretical Speed,
  Humor - Freud Reference,
  Metadiscussion - Wittgenstein Quote,
  Seminars - Knowledge Representation & Logic Programming,
  Conferences - AAAI-84 Call for Papers
----------------------------------------------------------------------

Date: Tue, 1 Nov 83 16:51:42 EST
From: Michael Fischer <Fischer@YALE.ARPA>
Subject: Lisp for MV8000

The University of New Haven is looking for any version of Lisp that
runs on a Data General MV8000, or for a portable Lisp written in Fortran
or Pascal that could be brought up in a short time.

Please reply to me by electronic mail and I will bring it to their
attention, or contact Alice Fischer directly at (203) 932-7069.

                           --  Michael Fischer <Fischer@YALE.ARPA>

------------------------------

Date: 5 Nov 83 21:31:57-EST (Sat)
From: decvax!microsoft!uw-beaver!tektronix!tekig1!sal @ Ucb-Vax
Subject: Expert systems for troubleshooting
Article-I.D.: tekig1.1442

I am in the process of evaluating the feasibility of developing expert
systems for troubleshooting instruments and functionally complete
circuit boards.  If anyone has had any experience in this field or has
seen a similar system, please get in touch with me either through the
net or call me at 503-627-3678 during 8:00am - 6:00pm PST.  Thanks.

                                    Salahuddin Faruqui
                                    Tektronix, Inc.
                                    Beaverton, OR 97007.

------------------------------

Date: 4 Nov 83 17:20:42-PST (Fri)
From: ihnp4!ihuxl!pvp @ Ucb-Vax
Subject: Looking for a rules based expert system.
Article-I.D.: ihuxl.707

I am interested in obtaining a working version of a rule based
expert system, something on the order of RITA, ROSIE, or EMYCIN.
I am interested in the knowledge and inference control structure,
not an actual knowledge base. The application would be in the
area of switching system maintenance and operation.

I am in the 5ESS(tm) project, and so prefer a Unix based product,
but I would be willing to convert a different type if necessary.
An internal BTL product would be desirable, but if anyone knows
about a commercially available system, I would be interested in
evaluating it.

Thanks in advance for your help.

                Philip Polli
                BTL Naperville
                IX 1F-474
                (312) 979-0834
                ihuxl!pvp

------------------------------

Date: Mon 7 Nov 83 09:50:29-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: IEEE Spectrum Alert

The November issue of IEEE Spectrum is devoted to the 5th Generation.
In addition to the main survey (which includes some very detailed tables
about sources of funding), there are:

  A review of Feigenbaum and McCorduck's book, by Mark Stefik.

  A glossary (p. 39) of about 25 AI and CS terms, taken from
  Gevarter's Overview of AI and Robotics for NASA.

  Announcement (p. 126) of The Artificial Intelligence Report, a
  newsletter for people interested in AI but not engaged in research.
  It will begin in January; no price is given.  Contact Artificial
  Intelligence Publications, 95 First St., Los Altos, CA  94022,
  (415) 949-2324.

  Announcement (p. 126) of a tour of Japan for those interested in
  the 5th Generation effort.

  Brief discussion (p. 126) of Art and Computers: The First Artificial-
  Intelligence Coloring Book, a set of line drawings by an artist-taught
  rule-based system.

  An interesting parable (p. 12) for those who would educate the public
  about AI or any other topic.

                                        -- Ken Laws

------------------------------

Date: 5-Nov-83 10:41:44-CST (Sat)
From: Overbeek@ANL-MCS (Overbeek)
Subject: Stalking The Gigalip

                 [Reprinted from the Prolog Digest.]

E. W. Lusk and I recently wrote a short note concerning attempts
to produce high-speed Prolog machines.  I apologize for perhaps
restating the obvious in the introduction.  In any event we
solicit comments.


                              Stalking the Gigalip

                                   Ewing Lusk

                                Ross A. Overbeek

                   Mathematics and Computer Science Division
                          Argonne National Laboratory
                            Argonne, Illinois 60439


          1.  Introduction

               The Japanese have recently established the goal of
          producing a machine capable of producing between 10 million
          and 1 billion logical inferences per second (where a logical
          inference corresponds to a Prolog procedure invocation).
          The motivating belief is that logic programming unifies many
          significant areas of computer science, and that expert
          systems based on logic programming will be the dominant
          application of computers in the 1990s.  A number of
          countries have at least considered attempting to compete
          with the Japanese in the race to attain a machine capable of
          such execution rates.  The United States funding agencies
          have definitely indicated a strong desire to compete with
          the Japanese in the creation of such a logic engine, as well
          as in the competition to produce supercomputers that can
          deliver at least two orders of magnitude improvement
          (measured in megaflops) over current machines.  Our goal in
          writing this short note is to offer some opinions on how to
          go about creating a machine that could execute a gigalip.
          It is certainly true that the entire goal of creating such a
          machine should be subjected to severe criticism.  Indeed, we
          feel that it is probably the case that a majority of people
          in the AI research community feel that it offers (at best) a
          misguided effort.  Rather than entering this debate, we
          shall concentrate solely on discussing an approach to the
          goal.  In our opinion a significant component of many of the
          proposed responses by researchers in the United States is
          based on the unstated assumption that the goal itself is not
          worth pursuing, and that the benefits will accrue from
          additional funding to areas in AI that only minimally
          impinge on the stated objective.

[ This paper is available on {SU-SCORE} as:

       PS:<Prolog>ANL-LPHunting.Txt

  There is a limited supply of hard copies that
  can be mailed to those with read-only access
  to this newsletter  -ed ]

------------------------------

Date: Monday, 7 November 1983 12:03:23 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Intelligence; theoretical speed

        Not to stir this up again, but around here, some people like the
definition that intelligence is "knowledge brought to bear to solve
problems".  This indicates that you need knowledge, ways of applying it, and
a concept of a "problem", which implies goals.  One problem with measuring
human "IQ"s is that you almost always end up measuring (at least partly) how
much knowledge someone has, and what culture they're part of, as well as the
pure problem solving capabilities (if any such critter exists).

        As for the theoretical speed of processing, the speed of light is a
theoretical limit on the propagation of information (!), not just matter, so
the maximum theoretical cycle speed of a processor with a one foot long
information path (mighty small) is a nanosecond (not too fast!).  So the
question is, what is the theoretical limit on the physical size of a
processor?  (Or, how do you build a transistor out of three atoms?)
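The one-foot figure above is just light-travel time, and the arithmetic is easy to write out (physical constants only; the function name is mine):

```python
C = 299_792_458.0          # speed of light in vacuum, m/s
FOOT = 0.3048              # metres per foot

def min_cycle_time(path_length_m):
    """Lower bound on cycle time: a signal must cross the information
    path at least once per cycle, and nothing outruns light."""
    return path_length_m / C

t = min_cycle_time(FOOT)
assert 1.0e-9 < t < 1.1e-9     # roughly a nanosecond for a one-foot path
```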

------------------------------

Date: 4 Nov 83 7:01:30-PST (Fri)
From: harpo!eagle!mhuxl!mhuxm!pyuxi!pyuxss!aaw @ Ucb-Vax
Subject: Humor
Article-I.D.: pyuxss.196

[Semi-Summary of Halting Problem Disc]
must have been some kind of joke.  Sigmund's book is a real layman's
book, and in it he asserts that the joke
    a: where are you going?
    b: MINSKY
    a: you said "minsky" so I'd think you are going to "pinsky".  I
       happen to know you are going to "minsky", so what's the use in lying?
is funny.
                                aaron werman pyuxi!pyuxss!aaw

------------------------------

Date: 05 Nov 83  1231 PST
From: Jussi Ketonen <JK@SU-AI>
Subject: Inscrutable Intelligence

On useless discussions - one more quote by Wittgenstein:
        Wovon man nicht sprechen kann, darueber muss man schweigen.
        (Whereof one cannot speak, thereof one must be silent.)

------------------------------

Date: Fri, 4 Nov 83 19:28 PST
From: Moshe Vardi <vardi@Diablo>
Subject: Knowledge Seminar

Due to the overwhelming response to my announcement and the need to
find a bigger room, the first meeting is postponed to Dec. 9,
10:00am.

Moshe Vardi

------------------------------

Date: Thu, 3 Nov 1983  22:50 EST
From: HEWITT%MIT-OZ@MIT-MC.ARPA
Subject: SEMINAR

               [Forwarded by SASW@MIT-MC.]


        Date:  Thursday, November 10, l983   3:30 P.M.
        Place: NE43 8th floor Playroom
        Title: "Some Fundamental Limitations of Logic Programming"
        Speaker: Carl Hewitt

Logic Programming has been proposed by some as the universal
programming paradigm for the future.  In this seminar I will discuss
some of the history of the ideas behind Logic Programming and assess
its current status.  Since many of the problems with current Logic
Programming Languages such as Prolog will be solved, it is not fair to
base a critique of Logic Programming by focusing on the particular
limitations of languages like Prolog.  Instead I will focus discussion
on limitations which are inherent in the enterprise of attempting to
use logic as a programming language.

------------------------------

Date: Thu 3 Nov 83 10:44:08-PST
From: Ron Brachman <Brachman at SRI-KL>
Subject: AAAI-84 Call for Papers


                          CALL FOR PAPERS


                              AAAI-84


        The 1984 National Conference on Artificial Intelligence

   Sponsored by the American Association for Artificial Intelligence
     (in cooperation with the Association for Computing Machinery)

                 University of Texas, Austin, Texas

                         August 6-10, 1984

AAAI-84 is the fourth national conference sponsored by the American
Association for Artificial Intelligence.  The purpose of the conference
is to promote scientific research of the highest caliber in Artificial
Intelligence (AI), by bringing together researchers in the field and by
providing a published record of the conference.


TOPICS OF INTEREST

Authors are invited to submit papers on substantial, original, and
previously unreported research in any aspect of AI, including the
following:

AI and Education                        Knowledge Representation
     (including Intelligent CAI)        Learning
AI Architectures and Languages          Methodology
Automated Reasoning                        (including technology transfer)
     (including automatic program-      Natural Language
      ming, automatic theorem-proving,      (including generation,
      commonsense reasoning, planning,       understanding)
      problem-solving, qualitative      Perception (including speech, vision)
      reasoning, search)                Philosophical and Scientific
Cognitive Modelling                                Foundations
Expert Systems                          Robotics



REQUIREMENTS FOR SUBMISSION

Timetable:  Authors should submit five (5) complete copies of their
papers (hard copy only---we cannot accept on-line files) to the AAAI
office (address below) no later than April 2, 1984.  Papers received
after this date will be returned unopened.  Notification of acceptance
or rejection will be mailed to the first author (or designated
alternative) by May 4, 1984.

Title page:  Each copy of the paper should have a title page (separate
from the body of the paper) containing the title of the paper, the
complete names and addresses of all authors, and one topic from the
above list (and subtopic, where applicable).

Paper body:  The authors' names should not appear in the body of the
paper.  The body of the paper must include the paper's title and an
abstract.  This part of the paper must be no longer than thirteen (13)
pages, including figures but not including bibliography.  Pages must be
no larger than 8-1/2" by 11", double-spaced (i.e., no more than
twenty-eight (28) lines per page), with text no smaller than standard
pica type (i.e., at least 12 pt. type).  Any submission that does not
conform to these requirements will not be reviewed.  The publishers will
allocate four pages in the conference proceedings for each accepted
paper, and will provide additional pages at a cost to the authors of
$100.00 per page over the four page limit.

Review criteria:  Each paper will be stringently reviewed by experts in
the area specified as the topic of the paper.  Acceptance will be based
on originality and significance of the reported research, as well as
quality of the presentation of the ideas.  Proposals, surveys, system
descriptions, and incremental refinements to previously published work
are not appropriate for inclusion in the conference.  Applications
clearly demonstrating the power of established techniques, as well as
thoughtful critiques and comparisons of previously published material
will be considered, provided that they point the way to new research in
the field and are substantive scientific contributions in their own
right.


Submit papers and                     Submit program suggestions
   general inquiries to:                    and inquiries to:

American Association for              Ronald J. Brachman
    Artificial Intelligence           AAAI-84 Program Chairman
445 Burgess Drive                     Fairchild Laboratory for
Menlo Park, CA  94025                    Artificial Intelligence Research
(415) 328-3123                        4001 Miranda Ave., MS 30-888
AAAI-Office@SUMEX                     Palo Alto, CA  94304
                                      Brachman@SRI-KL

------------------------------

End of AIList Digest
********************

∂10-Nov-83  0230	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #94
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Nov 83  02:30:14 PST
Date: Wednesday, November 9, 1983 1:34PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #94
To: AIList@SRI-AI


AIList Digest           Wednesday, 9 Nov 1983      Volume 1 : Issue 94

Today's Topics:
  Metaphysics - Functionalism vs Dualism,
  Ethics - Implications of Consciousness,
  Alert - Turing Biography,
  Theory - Parallel vs. Sequential & Ultimate Speed,
  Intelligence - Operational Definitions
----------------------------------------------------------------------

Date: Mon 7 Nov 83 18:30:07-PST
From: WYLAND@SRI-KL.ARPA
Subject: Functionalism vs Dualism in consciousness

        The argument of functionalism versus dualism is
unresolvable because the models are based on different,
complementary paradigms:

      * The functionalism model is based on the reductionist
    approach, the approach of modern science, which explains
    phenomena by logically relating them to controlled,
    repeatable, publicly verifiable experiments.  The
    explanations of falling bodies and chemical reactions are
    in this category.

      * The dualism model is based on the miraculous approach,
    which explains phenomena as singular events, which are by
    definition not controlled, not repeatable, not verifiable,
    and not public - i.e., the events are observed only by a specific
    individual or group.  The existence of UFOs, parapsychology,
    and the existence of externalized consciousness (i.e., the soul)
    are in this category.

        These two paradigms are the basis of the argument of
Science versus Religion, and are not resolvable EITHER WAY.  The
reductionist model, based on the philosophy of Parmenides and
others, assumes a constant, unchanging universe which we discover
through observation.  Such a universe is, by definition,
repeatable and totally predictable: the concept that we could
know the total future if we knew the position and velocity of all
particles derives from this.  The success of Science at
predicting the future is used as an argument for this paradigm.

        The miraculous model assumes the reality of change, as
put forth by Heraclitus and others.  It allows reality to be
changed by outside forces, which may or may not be knowable
and/or predictable.  Changes caused by outside forces are, by
definition, singular events not caused by the normal chains of
causality.  Our personal consciousness and (by extension,
perhaps) the existance of life in the universe are singular
events (as far as we know), and the basic axioms of any
reductionist model of the universe are, by definition,
unexplainable because they must come from outside the system.

        The argument of functionalism versus dualism is not
resolvable in a final sense, but there are some working rules we
can use after considering both paradigms.  Any definition of
intelligence, consciousness (as opposed to Consciousness), etc.
has to be based on the reductionist model: it is the only way we
can explain things in such a manner that we can predict results
and prove theories.  On the other hand, the concept that all
sources of consciousness are mechanical is a religious position: a
categorical assumption about reality.  It was not that long ago
that science said that stones do not fall from the sky; all
it would take to make UFOs accepted as fact would be for one to
land and set up shop as a merchant dealing in rugs and spices
from Aldebaran and Vega.

------------------------------

Date: Tuesday, 8 November 1983 14:24:55 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Ethics and Definitions of Consciousness

        Actually, I believe you'll find that slavery has existed both with
and without the belief that the slave had a soul.  In many ancient societies
slaves were of the same stock as their owners; they had just run
into serious economic difficulties.  As I recall, slavery of blacks in
the U.S. wasn't justified by their not having souls, but by claiming they
were better off (or similar drivel).  The fact that denying other people had
souls was used at some time to justify it doesn't bother me, since all kinds
of other rationalizations have been used.

        Now we are approaching the time when we will have intelligent
mechanical slaves.  Are you advocating that it should be illegal to own
robots that can pass the Turing (or other similar) test?  I think that a
very important thing to consider is that we can probably make a robot really
enjoy being a slave, by setting up the appropriate top-level goals.  Should
this be illegal?  I think not.  Suppose we reach the point where we can
alter fetuses (see "Brave New World" by Aldous Huxley) to the point where
they *really* enjoy being slaves to whoever buys them.  Should this be
illegal?  I think so.  What if we build fetuses from scratch?  Harder to
say, but I suspect this should be illegal.

        The most conservative (small "c") approach to the problem is to
grant human rights to anything that *might* qualify as intelligent.  I think
this would be a mistake, unless you allow biological organisms a distinction
as outlined above.  The next most conservative approach seems to me to leave
the situation where it is today: if it is physically an independent human
life, it has legal rights.

------------------------------

Date: 8 Nov 1983 09:26-EST
From: Jon.Webb@CMU-CS-IUS.ARPA
Subject: parallel vs. sequential

Parallel and sequential machines are not equivalent, even in abstract
models.  For example, an abstract parallel machine can generate truly
random numbers by starting two processes at the same time, which are
identical except that one sends the main processor a "0" and the other
sends a "1". The main processor accepts the first number it receives.
A Turing machine can generate only pseudo-random numbers.
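Webb's race construction is easy to sketch (a hypothetical illustration
of the idea, not code from his message; the thread-and-queue framing is
mine): two workers race to deliver a bit, and OS scheduling decides
which arrives first.

```python
# Sketch of the race: two processes start "at the same time", one
# sends "0" and the other "1"; the main thread keeps whichever
# value arrives first.  Scheduling jitter makes the outcome
# nondeterministic in a way no fixed sequential program reproduces.
import queue
import threading

def race_bit() -> str:
    box: queue.Queue = queue.Queue()
    t0 = threading.Thread(target=box.put, args=("0",))
    t1 = threading.Thread(target=box.put, args=("1",))
    t0.start(); t1.start()
    bit = box.get()          # first arrival wins
    t0.join(); t1.join()
    box.get()                # drain the loser's value
    return bit

if __name__ == "__main__":
    print(race_bit())        # "0" or "1", decided by the scheduler
```

Run it a few times; which bit wins depends on the scheduler, which is
exactly the point.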

However, I do not believe a parallel machine is more powerful (in the
formal sense) than a Turing machine with a true random-number
generator.  I don't know of a proof of this, but it sounds like
something on which work has been done.

Jon

------------------------------

Date: Tuesday, 8-Nov-83  18:33:07-GMT
From: O'KEEFE HPS (on ERCC DEC-10) <okeefe.r.a.@edxa>
Reply-to: okeefe.r.a. <okeefe.r.a.%edxa@ucl-cs>
Subject: Ultimate limit on computing speed

--------
    There was a short letter about this in CACM about 6 or 7 years ago.
I haven't got the reference, but the argument goes something like this.

1.  In order to compute, you need a device with at least two states
    that can change from one state to another.
2.  Information theory (or quantum mechanics or something, I don't
    remember which) shows that any state change must be accompanied
    by a transfer of at least so much energy (a definite figure was
    given).
3.  Energy contributes to the stress-energy tensor just like mass and
    momentum, so the device must be at least so big or it will undergo
    gravitational collapse (again, a definite figure).
4.  It takes light so long to cross the diameter of the device, and
    this is the shortest possible delay before we can definitely say
    that the device is in its new state.
5.  Therefore any physically realisable device (assuming the validity
    of general relativity, quantum mechanics, information theory ...)
    cannot switch faster than (again a definite figure).  I think the
    final figure was 10↑-43 seconds, but it's been a long time since
    I read the letter.
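For what it's worth, the quoted 10↑-43 figure is suggestive of the
Planck time; a quick back-of-envelope check (my conjecture about what
the letter's figure was, not its actual derivation):

```python
# Conjecture: the letter's ~10^-43 s limit is the Planck time,
# t_P = sqrt(hbar * G / c^5), built from exactly the constants the
# argument invokes (quantum mechanics, gravity, the speed of light).
import math

HBAR = 1.054571817e-34   # J*s, reduced Planck constant
G    = 6.67430e-11       # m^3 / (kg s^2), gravitational constant
C    = 2.99792458e8      # m/s, speed of light

planck_time = math.sqrt(HBAR * G / C**5)
print(f"t_P = {planck_time:.2e} s")   # about 5.4e-44 s
```

That lands within a factor of ten of the remembered figure, which is
about as close as a twenty-year-old recollection deserves.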


     I have found the discussion of "what is intelligence" boring,
confused, and unhelpful.  If people feel unhappy working in AI because
we don't have an agreed definition of the I part (come to that, do we
*really* have an agreed definition of the A part either?  if we come
across a planet inhabited by metallic creatures with CMOS brains that
were produced by natural processes, should their study belong to AI
or xenobiology, and does it matter?) why not just change the name of
the field, say to "Epistemics And Robotics".  I don't give a tinker's
curse whether AI ever produces "intelligent" machines; there are tasks
that I would like to see computers doing in the service of humanity
that require the representation and appropriate deployment of large
amounts of knowledge.  I would be just as happy calling this AI, MI,
or EAR.

     I think some of the contributors to this group are suffering from
physics envy, and don't realise what an operational definition is.  It
is a definition which tells you how to MEASURE something.  Thus length
is operationally defined by saying "do such and such.  Now, length is
the thing that you just measured."  Of course there are problems here:
no amount of operational definition will justify any connection between
"length-measured-by-this-foot-rule-six-years-ago" and "length-measured-
by-laser-interferometer-yesterday".  The basic irrelevance is that
an operational definition of say light (what your light meter measures)
doesn't tell you one little thing about how to MAKE some light.  If we
had an operational definition of intelligence (in fact we have quite a
few, and like all operational definitions, nothing to connect them) there
is no reason to expect that to help us MAKE something intelligent.

------------------------------

Date: 7 Nov 83 20:50:48 PST (Monday)
From: Hoffman.es@PARC-MAXC.ARPA
Subject: Turing biography

Finally, there is a major biography of Alan Turing!

        Alan Turing: The Enigma
        by Andrew Hodges
        $22.50  Simon & Schuster
        ISBN 0-671-49207-1

The timing is right:  His war-time work on the Enigma has now been
de-classified.  His rather open homosexuality can be discussed in other
than damning terms these days.  His mother passed away in 1976. (She
maintained that his death in 1954 was not suicide, but an accident, and
she never mentioned his sexuality nor his 1952 arrest.)  And, of course,
the popular press is full of stories on AI, and they always bring up the
Turing Test.

The book is 529 pages, plus photographs, some diagrams, an author's note
and extensive bibliographic footnotes.

Doug Hofstadter's review of the book will appear in the New York Times
Book Review on November 13.

--Rodney Hoffman

------------------------------

Date: Mon,  7 Nov 83 15:40:46 CST
From: Robert.S.Kelley <kelleyr.rice@Rand-Relay>
Subject: Operational definitions of intelligence

  p.s.  I can't imagine that psychology has no operational definition of
  intelligence (in fact, what is it?).  So, if worst comes to worst, AI
  can just borrow psychology's definition and improve on it.

     Probably the most generally accepted definition of intelligence in
psychology comes from Abraham Maslow's remark (here paraphrased) that
"Intelligence is that quality which best distinguishes such persons as
Albert Einstein and Marie Curie from the inhabitants of a home for the
mentally retarded."  A poorer definition is that intelligence is what
IQ tests measure.  In fact psychologists have searched without success
for a more precise definition of intelligence (or even learning) for
over 100 years.
                                Rusty Kelley
                                (kelleyr.rice@RAND-RELAY)

------------------------------

Date: 7 Nov 83 10:17:05-PST (Mon)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ecsvax!unbent @ Ucb-Vax
Subject: Inscrutable Intelligence
Article-I.D.: ecsvax.1488

I sympathize with the longing for an "operational definition" of
'intelligence'--especially since you've got to write *something* on
grant applications to justify all those hardware costs.  (That's not a
problem we philosophers have.  Sigh!)  But I don't see any reason to
suppose that you're ever going to *get* one, nor, in the end, that you
really *need* one.

You're probably not going to get one because "intelligence" is
one of those "open textury", "clustery" kinds of notions.  That is,
we know it when we see it (most of the time), but there are no necessary and
sufficient conditions that one can give in advance which instances of it
must satisfy.  (This isn't an uncommon phenomenon.  As my colleague Paul Ziff
once pointed out, when we say "A cheetah can outrun a man", we can recognize
that races between men and *lame* cheetahs, *hobbled* cheetahs, *three-legged*
cheetahs, cheetahs *running on ice*, etc. don't count as counterexamples to the
claim even if the man wins--when such cases are brought up.  But we can't give
an exhaustive list of spurious counterexamples *in advance*.)

Why not rest content with saying that the object of the game is to get
computers to be able to do some of the things that *we* can do--e.g.,
recognize patterns, get a high score on the Miller Analogies Test,
carry on an interesting conversation?  What one would like to say, I
know, is "do some of the things we do *the way we do them*"--but the
problem there is that we have no very good idea *how* we do them.  Maybe
if we can get a computer to do some of them, we'll get some ideas about
us--although I'm skeptical about that, too.

                        --Jay Rosenberg (ecsvax!unbent)

------------------------------

Date: Tue, 8 Nov 83 09:37:00 EST
From: ihnp4!houxa!rem@UCLA-LOCUS


THE MUELLER MEASURE

If an AI could be built to answer all questions we ask it to assure us
that it is ideally human (the Turing Test), it ought to
be smart enough to figure out questions to ask itself
that would prove that it is indeed artificial.  Put another
way: If an AI could make humans think it is smarter than
a human by answering all questions posed to it in a
Turing-like manner, it still is dumber than a human because
it could not ask questions of a human to make us answer
the questions so that it satisfies its desire for us to
make it think we are more artificial than it is.  Again:
If we build an AI so smart it can fool other people
by answering all questions in the Turing fashion, can
we build a computer, anti-Turing-like, that could make
us answer questions to fool other machines
into believing we are artificial?

Robert E. Mueller, Bell Labs, Holmdel, New Jersey

houxa!rem

------------------------------

Date: 9 November 1983 03:41 EST
From: Steven A. Swernofsky <SASW @ MIT-MC>
Subject: Turing test in everyday life

    . . .
    I know the entity at the other end of the line is not a computer
    (because they recognize my voice -- someone correct me if this is not a
    good test) but we might ask: how good would a computer program have to
    be to fool someone into thinking that it is human, in this limited case?

    [There is a system, in use, that can recognize affirmative and negative
    replies to its questions.
    . . .  -- KIL]

No, I always test these callers by interrupting to ask them questions,
by restating what they said to me, and by avoiding "yes/no" responses.

It appears to me that the extremely limited domain, and the utter lack of
expertise which people expect from the caller, would make it very easy to
simulate a real person.  Does the fact of a limited domain "disguise"
the intelligence of the caller, or does it imply that intelligence means
a lot less in a limited domain?

-- Steve

------------------------------

End of AIList Digest
********************

∂09-Nov-83  2344	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #95
Received: from SRI-AI by SU-AI with TCP/SMTP; 9 Nov 83  23:44:13 PST
Date: Wednesday, November 9, 1983 5:08PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #95
To: AIList@SRI-AI


AIList Digest           Thursday, 10 Nov 1983      Volume 1 : Issue 95

Today's Topics:
  Alert - Hacker's Dictionary,
  Conference - Robotic Intelligence and Productivity,
  Tutorial - Machine Translation,
  Report - AISNE meeting
----------------------------------------------------------------------

Date: 8 Nov 1983 1215:19-EST
From: Lawrence Osterman <OSTERMAN@CMU-CS-C.ARPA>
Subject: Guy Steele's

                  [Reprinted from the CMU-C bboard.]

New book is now out.
  The Hacker's Dictionary, available in the CMU Bookstore
right now.  The cost is $5.95 ($6.31 after tax) and it's well
worth getting.  (It includes (among other things) the COMPLETE
INTERCAL character set (ask anyone in 15-312 last fall),
Trash 80,N, Moby, and many others (El Camino Bignum?))


                        Larry

[According to another message, the CMU bookstore immediately
sold out.  -- KIL]

------------------------------

Date: 7 Nov 1983 1127-PST
From: MEDIONI@USC-ECLC
Subject: Conference announcement


        ******  CONFERENCE ANNOUNCEMENT  ******

   ROBOTIC INTELLIGENCE AND PRODUCTIVITY CONFERENCE

        WAYNE STATE UNIVERSITY, DETROIT, MICHIGAN

                 NOVEMBER 18-19, 1983

For more information and advance program, please contact:

Dr Pepe Siy
(313) 577-3841
(313) 577-3920 - Messages

or Dr Singh
(313) 577-3840

------------------------------

Date: Tue 8 Nov 83 10:06:34-CST
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: Tutorial Announcement

[The following is copied from a circular, with the author's encouragement.
 Square brackets delimit my personal insertions, for clarification. -- JS]


        THE INSTITUT DALLE MOLLE POUR LES ETUDES SEMANTIQUES ET
       COGNITIVES DE L'UNIVERSITE DE GENEVE ("ISSCO") is to hold

                             a Tutorial on

                          MACHINE TRANSLATION

   from Monday 2nd April to Friday 6th, 1984, in Lugano, Switzerland


The attraction of Machine Translation as an application domain for
computers has long been recognized, but pioneers in the field seriously
underestimated the complexity of the problem.  As a result, early
systems were severely limited.

The design of more recent systems takes into account the
interdisciplinary nature of the task, recognizing that MT involves the
construction of a complete system for the collection, representation,
and strategic deployment of a specialised kind of linguistic knowledge.
This demands contributions from the fields of theoretical and
computational linguistics, computer science, and expert system design.

The aim of this tutorial is to convey the state of the art by allowing
experts in different aspects of MT to present their particular points of
view.  Sessions covering the historical development of MT and its
possible future evolution will also be included to provide a tutorial
which should be relevant to all concerned with the relationship between
natural language and computer science.

The Tutorial will take place in the Palazzo dei Congressi or the Villa
Heleneum, both set in parkland on the shore of Lake Lugano, which is
perhaps the most attractive among the lakes of the Swiss/Italian Alps.
Situated to the south of the Alpine massif, Lugano enjoys an early, warm spring.
Participants will be accommodated in nearby hotels.  Registration will
take place on the Sunday evening preceding the Tutorial.


COSTS: Fees for registration submitted by January 31, 1984, will be 120
Swiss francs for students, 220 Swiss francs for academic participants,
and 320 Swiss francs for others.  After this date the fees will increase
by 50 Swiss francs for all participants.  The fees cover tuition,
handouts, coffee, etc.  Hotel accommodation varies between 30 and 150
Swiss francs per night [booking form available, see below].  It may be
possible to arrange cheaper [private] accommodation for students.

FOR FURTHER INFORMATION [incl. booking forms, etc.] (in advance of the
Tutorial) please contact ISSCO, 54 route des Acacias, CH-1227 Geneva; or
telephone [41 for Switzerland] (22 for Geneva) 20-93-33 (University of
Geneva), extension ("interne") 21-16 ("vingt-et-un-seize").  The
University switchboard is closed daily from 12 to 1:30 Swiss time.
[Switzerland is six (6) hours ahead of EST, thus 9 hours ahead of PST.]

------------------------------

Date: Tue 8 Nov 83 10:59:12-CST
From: Jonathan Slocum <LRC.Slocum@UTEXAS-20.ARPA>
Subject: Tutorial Program

                         PROVISIONAL PROGRAMME


Each session is scheduled to include a 50-minute lecture followed by a
20-minute discussion period.  Most evenings are left free, but rooms
will be made available for informal discussion, poster sessions, etc.

Sun. 1st   5 p.m. to 9 p.m.  Registration

Mon. 2nd   9:30  Introductory session                   M. King [ISSCO]

          11:20  A non-conformist's view of the         G. Sampson [Lancaster]
                 state of the art
           2:30  Pre-history of Machine Translation     B. Buchmann [ISSCO]

           4:20  SYSTRAN                                P. Wheeler [Commission
                                                           of the European
                                                           Communities]

Tue. 3rd   9:30  An overview of post-65 developments    E. Ananiadou [ISSCO]
                                                        S. Warwick [ISSCO]
          11:20  Software for MT I: background          J.L. Couchard [ISSCO]
                                                        D. Petitpierre  [ISSCO]
           2:30  SUSY                                   D. MAAS [Saarbruecken]

           4:20  TAUM Meteo and TAUM Aviation           P. Isabelle [Montreal]

Wed. 4th   9:30  Linguistic representations in          A. De Roeck [Essex]
                 syntax based MT systems
          11:00  AI approaches to MT                    P. Shann [ISSCO]

          12:00  New developments in Linguistics        E. Wehrli [UCLA]
                 and possible implications for MT
           3:00  Optional excursion

Thu. 5th   9:30  GETA                                   C. Boitet [Grenoble]

          11:20  ROSETTA                                J. Landsbergen [Philips]

           2:30  Software for MT II:                    R. Johnson [Manchester]
                 some recent developments               M. Rosner [ISSCO]
           4:20  Creating an environment for            A. Melby [Brigham Young]
                 the translator
Fri. 6th   9:30  METAL                                  J. Slocum [Texas]

          11:20  EUROTRA                                M. King [ISSCO]

           2:30  New projects in France                 C. Boitet [Grenoble]

           4:20  MT - the future                        A. Zampoli [Pisa]

           5:30  Closing session


There will be a 1/2 hour coffee break between sessions.  The lunch break
is from 12:30 to 2:30.

------------------------------

Date: Mon, 7 Nov 83 14:01 EST
From: Visions <kitchen%umass-cs@CSNet-Relay>
Subject: Report on AISNE meeting (long message)


                        BRIEF REPORT ON
                FIFTH ANNUAL CONFERENCE OF THE
                   AI SOCIETY OF NEW ENGLAND

Held at Brown University, Providence, Rhode Island, 4th-5th November 1983.
Programme Chairman: Drew McDermott (Yale)
Local Arrangements Chairman: Eugene Charniak (Brown)


Friday, 4th November

8:00PM
Long talk by Harry Pople (Pittsburgh), "Where is the expertise in
expert systems?"  Comments and insights about the general state of
work in expert systems.  INTERNIST: history, structure, and example.

9:30PM
"Intense intellectual colloquy and tippling" [Quoted from programme]

LATE
Faculty and students at Brown very hospitably billeted us visitors
in their homes.


Saturday, 5th November

10:00AM
Panel discussion, Ruven Brooks (ITT), Harry Pople (Pittsburgh), Ramesh
Patil (MIT), Paul Cohen (UMass), "Feasible and infeasible expert-systems
applications".  [Unabashedly selective and incoherent notes:]  RB: Expert
systems have to be relevant, and appropriate, and feasible.  There are
by-products of building expert systems, for example, the encouragement of
the formalization of the problem domain.  HP: Historically, considering
DENDRAL and MOLGEN, say, users have ultimately made greater use of the
tools and infrastructure set up by the designers than of the top-level
capabilities of the expert system itself.  The necessity of taking into
account the needs of the users.  RP:  What is an expert system?  Is
MACSYMA no more than a 1000-key pocket calculator?  Comparison of expert
systems against real experts.  Expert systems that actually work --
narrow domains in which hypotheses can easily be verified.  What if the
job of identifying the applicability of an expert system is a harder
problem than the one the expert system itself solves?  In the domains of
medical diagnosis: enormous space of diagnoses, especially if multiple
disorders are considered.  Needed: reasoning about: 3D space, anatomy;
time; multiple disorders, causality; demography; physiology; processes.
HP: A strategic issue in research: small-scale, tractable problems that
don't scale up.  Is there an analogue of Blocksworld?  PC: Infeasible
(but not too infeasible) problems are fit material for research; feasible
problems for development.  The importance of theoretical issues in choosing
an application area for research.  An animated, general discussion followed.

11:30AM
Short talks:
Richard Brown (Mitre), Automatic programming.  Use of knowledge about
programming and knowledge about the specific application domain.
Ken Wasserman (Columbia), "Representing complex physical objects".  For
use in a system that digests patent abstracts.  Uses a frame-like
representation, giving parts, subparts, and the relationships between them.
Paul Barth (Schlumberger-Doll), Automatic programming for drilling-log
interpretation, based on a taxonomy of knowledge sources, activities, and
corresponding transformation and selection operations.
Malcolm Cook (UMass), Narrative summarization.  Goal orientations of the
characters and the interactions between them.  "Affect state map".
Extract recognizable patterns of interaction called "plot units".  Summary
based on how these plot units are linked together.  From this summary
structure a natural-language summary of the original can be generated.

12:30PM
Lunch, during which Brown's teaching lab, equipped with 55 Apollos,
was demonstrated.

2:00PM
Panel discussion, Drew McDermott (Yale), Randy Ellis (UMass), Tomas
Lozano-Perez (MIT), Mallory Selfridge (UConn), "AI and Robotics".
DMcD contemplated the effect that the realization of a walking, talking,
perceiving robot would have on AI.  He remarked how current robotics
work does entail a lot of AI, but that there is necessary, robotics-
specific groundwork (like matrices, a code-word for "much mathematics").
All the other panelists had a similar view of this inter-relation between
robotics and AI.  The other panelists then sketched robotics work being
done at their respective institutions.  RE:  Integration of vision and
touch, using a reasonable world model, some simple planning, and feedback
during the process.  Cartesian robot, gripper, Ken Overton's tactile array
sensor (force images), controllable camera, Salisbury hand.  Need for AI
in robotics, especially object representation and search.  Learning -- a
big future issue for a robot that actually moves about in the world.
Problems of implementing algorithms in real time.  For getting started in
robotics: kinematics, materials science, control theory, AI techniques,
but how much of each depends on what you want to do in robotics.  TL-P:
A comparatively lengthy talk on "Automatic synthesis of fine motion
strategies", best exemplified by the problem of putting a peg into a hole.
Given the inherent uncertainty in all positions and motions, the best
strategy (which we probably all do intuitively) is to aim the peg just to
one side of the hole, sliding it across into the hole when it hits,
grazing the far side of the hole as it goes down.  A method for generating
such a strategy automatically, using a formalism based on configuration
spaces, generalized dampers, and friction cones.  MS: Plans for commanding
a robot in natural language, and for describing things to it, and for
teaching it how to do things by showing it examples (from which the robot
builds an abstract description, usable in other situations).  A small, but
adequate robotics facility.  Afterwards, an open discussion, during which
was stressed how important it is that the various far-flung branches of AI
be more aware of each other, and not become insular.  Regarding robotics
research, all panelists agreed strongly that it was absolutely necessary
to work with real robot hardware; software simulations could not hope to
capture all the pernickety richness of the world, motion, forces, friction,
slippage, uncertainty, materials, bending, spatial location, at least not
in any computationally practical way.  No substitute for reality!

3:30PM
More short talks
Jim Hendler (Brown), an overview of things going on at Brown, and in the
works.  Natural language (story comprehension).  FRAIL (frame-based
knowledge representation).  NASL (problem solving).  An electronic
repair manual, which generates instructions for repairs as needed from
an internal model, hooked up with a graphics and 3D modelling system.
And in the works: expert systems, probabilistic reasoning, logic programming,
problem solving, parallel computation (in particular marker-passing and
BOLTZMANN-style machines).  Brown is looking for a new AI faculty member.
[Not a job ad, just a report of one!]
David Miller (Yale), "Uncertain planning through uncertain territory".
How to get from A to B if your controls and sensors are unreliable.
Find a path to your goal, along the path select checkpoints (landmarks),
adjust the path to go within eye-shot of the checkpoints, then off you go,
running demons to watch out for checkpoints and raise alarms if they don't
appear when expected.  This means you're lost.  Then you generate hypotheses
about where you are now (using your map), and what might have gone wrong to
get you there (based on a self-model).  Verify one (some? all?) of these
hypotheses by looking around.  Patch your plan to get back to an
appropriate checkpoint.  Verify the whole process by getting back on the beaten
track.  Apparently there's a real Hero robot that cruises about a room
doing this.
Bud Crawley (GTE) described what was going on at GTE Labs in AI.  Know-
ledge-based systems.  Natural-language front-end for data bases.
Distributed intelligence.  Machine learning.
Bill Taylor (Gould Inc.), gave an idea of what applied AI research means
to his company, which (in his division) makes digital controllers for
running machines out on the factory floor.  Currently, an expert system
for repairing these controllers in the field.  [I'm not sure how far along
in being realized this was, I think very little.]  For the future, a big,
smart system that would assist a human operator in managing the hundreds
of such controllers out on the floor of a decent sized factory.
Graeme Hirst (Brown, soon Toronto), "Artificial Digestion".  Artificial
Intelligence attempts to model a very poorly understood system, the human
cognitive system.  Much more immediate and substantial results could be
obtained by modelling a much better understood system, the human digestive
system.  Examples of the behavior of a working prototype system on simulated
food input, drawn from a number of illustrative food-domains, including
a four-star French restaurant and a garbage pail.  Applications of AD:
automatic restaurant reviewing, automatic test-marketing of new food
products, and vicarious eating for the diet-conscious and orally impaired.
[Forget about expert systems; this is the hot new area for the 80's!]

4:30PM
AISNE Business Meeting (Yes, some of us stayed till the end!)
Next year's meeting will be held at Boston University.  The position of
programme chairman is still open.


A Final Remark:
All the above is based on my own notes of the conference.  At the very
least it reflects my own interests and pre-occupations.  Considering
the disorganized state of my notes, and the late hour I'm typing this,
a lot of the above may be just wrong.  My apologies to anyone I've
misrepresented; by all means correct me.  I hope the general interest of
this report to the AI community outweighs all these failings.  LJK

===========================================================================

------------------------------

End of AIList Digest
********************

∂14-Nov-83  1831	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #96
Received: from SRI-AI by SU-AI with TCP/SMTP; 14 Nov 83  18:29:11 PST
Date: Monday, November 14, 1983 8:48AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #96
To: AIList@SRI-AI


AIList Digest            Monday, 14 Nov 1983       Volume 1 : Issue 96

Today's Topics:
  Theory - Parallel Systems,
  Looping Problem in Literature,
  Intelligence
----------------------------------------------------------------------

Date: 8 Nov 83 23:03:04-PST (Tue)
From: pur-ee!uiucdcs!uokvax!andree @ Ucb-Vax
Subject: Re: Infinite loops and Turing machines.. - (nf)
Article-I.D.: uiucdcs.3712

/***** uokvax:net.ai / umcp-cs!speaker /  9:41 pm  Nov  1, 1983 */
Aha!  I knew someone would come up with this one!
Consider that when we talk of simultaneous events... we speak of
simultaneous events that occur within one Turing machine state
and outside of the Turing machine itself.  Can a one-tape
Turing machine read the input of 7 discrete sources at once?
A 7 tape machine with 7 heads could!
/* ---------- */

But I can do it with a one-tape, one-head Turing machine.  Let's assume
that each of your 7 discrete sources can always be represented in n bits.
Thus, the total state of all seven sources can be represented in 7*n bits.
My one-tape Turing machine has 2 ** (7*n) symbols, so it can handle your
7 sources, each possible state of all 7 being one symbol of input.

One of the things I did in an undergraduate theory course was show that
an n-symbol Turing machine is no more powerful than a two-symbol Turing
machine for any finite (countable?) n.  You just lose speed.
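[The packing argument above can be sketched in modern code.  This is an
illustration by the editor, not part of the original message: it shows how
seven n-bit source states combine into a single symbol drawn from an
alphabet of 2**(7*n) symbols, and how the individual states are recovered.]

```python
def pack(sources, n):
    """Combine seven n-bit source states into one tape symbol,
    an integer in the range 0 .. 2**(7*n) - 1."""
    symbol = 0
    for s in sources:
        assert 0 <= s < 2 ** n, "each source state must fit in n bits"
        symbol = (symbol << n) | s
    return symbol

def unpack(symbol, n, count=7):
    """Recover the individual source states from a packed symbol."""
    mask = 2 ** n - 1
    states = []
    for _ in range(count):
        states.append(symbol & mask)
        symbol >>= n
    return list(reversed(states))
```

Each composite symbol is read in a single step, so the one-tape machine sees
all seven sources "at once," exactly as the larger alphabet allows.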

        <mike

------------------------------

Date: Friday, 11 November 1983, 14:54-EST
From: Carl Hewitt <HEWITT at MIT-AI>
Subject: parallel vs. sequential

An excellent treatise on how some parallel machines are more powerful
than all sequential machines can be found in Will Clinger's doctoral
dissertation "Foundations of Actor Semantics" which can be obtained by
sending $7 to

Publications Office
MIT Artificial Intelligence Laboratory
545 Technology Square
Cambridge, Mass. 02139

requesting Technical Report 633 dated May 1981.

------------------------------

Date: Fri 11 Nov 83 17:12:08-PST
From: Wilkins  <WILKINS@SRI-AI.ARPA>
Subject: parallelism and turing machines


Regarding the "argument" that parallel algorithms cannot be run serially
because a Turing machine cannot react to things that happen faster than
the time it needs to change states:
clearly, you need to go back to whoever sold you the Turing machine
for this purpose and get a turbocharger for it.

Seriously, I second the motion to move towards more useful discussions.

------------------------------

Date: 9 Nov 83 19:28:21-PST (Wed)
From: ihnp4!cbosgd!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!mac @ Ucb-Vax
Subject: the halting problem in history
Article-I.D.: uvacs.1048


   If there were any 'subroutines' in the brain that could not
   halt... I'm sure they would have been found and bred out of
   the species long ago.  I have yet to see anyone die from
   an infinite loop. (umcp-cs.3451)

There is such a case.  It is caused by seeing an object called the Zahir.  One was
a Persian astrolabe, which was cast into the sea lest men forget the world.
Another was a certain tiger.  Around 1900 it was a coin in Buenos Aires.
Details in "The Zahir", J.L.Borges.

------------------------------

Date: 8 Nov 83 16:38:29-PST (Tue)
From: decvax!wivax!linus!vaxine!wjh12!foxvax1!brunix!rayssd!asa @ Ucb-Vax
Subject: Re: Inscrutable Intelligence
Article-I.D.: rayssd.233

The problem with a psychological definition of intelligence is in finding
some way to make it different from what animals do, and cover all of the
complex things that humans can do.  It used to be measured by written
tests.  These were grossly unfair, so visual tests were added; these tend
to be grossly unfair because of cultural bias.  Dolphins can do very
"intelligent" things, based on types of "intelligent behavior". The best
definition might be based on the rate at which learning occurs, as some
have suggested, but that is also an oversimplification. The ability to
deduce cause and effect, and to predict effects is obviously also
important. My own feeling is that it has something to do with the ability
to build a model of yourself and modify yourself accordingly. It may
be that "I conceive" (not "I think"), or "I conceive and act", or "I
conceive of conceiving" may be as close as we can get.

------------------------------

Date: 8 Nov 83 23:02:53-PST (Tue)
From: pur-ee!uiucdcs!uokvax!rigney @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: uiucdcs.3711

Perhaps something on the order of "Intelligence enhances survivability
through modification of the environment" is in order.  By modification
something other than the mere changes brought about by living is indicated
(e.g., a rise in CO2 levels doesn't count).

Thus, if Turtles were intelligent, they would kill the baby rabbits, but
they would also attempt to modify the highway to present less of a hazard.

Problems with this viewpoint:

        1) It may be confusing Technology with Intelligence.  Still, tool
        making ability has always been a good sign.

        2) Making the distinction between Intelligent modifications and
        the effect of just being there.  Since "conscious modification"
        lands us in a bigger pit of worms than we're in now, perhaps a
        distinction should be drawn between reactive behavior (reacting
        and/or adapting to changes) and active behavior (initiating
        changes).  Initiative is therefore a factor.

        3) Monkeys make tools (ant sticks); Dolphins don't.  Is this an
        indication of intelligence, or just a side-effect of Monkeys
        having hands and Dolphins not?  In other words, does Intelligence
        go away if the organism doesn't have the means of modifying
        its environment?  Perhaps "potential" ability qualifies.  Or
        we shouldn't consider specific instances (Is a man trapped in
        a desert still intelligent, even if he has no way to modify
        his environment?)
           Does this mean that if you had a computer with AI, and
        stripped its peripherals, it would lose intelligence?  Are
        human autistics intelligent?  Or are we only considering
        species, and not representatives of species?

In the hopes that this has added fuel to the discussion,

                Carl
                ..!ctvax!uokvax!rigney
                ..!duke!uok!uokvax!rigney

------------------------------

Date: 8 Nov 83 20:51:15-PST (Tue)
From: pur-ee!uiucdcs!uicsl!dinitz @ Ucb-Vax
Subject: Re: RE:intelligence and adaptability - (nf)
Article-I.D.: uiucdcs.3746

Actually, SHRDLU had neither hand nor eye -- only simulations of them.
That's a far cry from the real thing.

------------------------------

Date: 9 Nov 83 16:20:10-PST (Wed)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!uvacs!mac @ Ucb-Vax
Subject: inscrutable intelligence
Article-I.D.: uvacs.1047


Regarding inscrutability of intelligence [sri-arpa.13363]:

Actually, it's typical that a discipline can't define its basic object of
study.  Ever heard a satisfactory definition of mathematics (it's not just
the consequences of set theory) or philosophy?  What is physics?

Disciplines are distinguished from each other for historical and
methodological reasons.  When they can define their subject precisely it is
because they have been superseded by the discipline that defines their
terms.

It's usually not important (or possible) to define e.g. intelligence
precisely.  We know it in humans.  This is where the IQ tests run into
trouble.  AI seems to be about behavior in computers that would be called
intelligent in humans.  Whether the machines are or are not intelligent
(or, for that matter, conscious) is of little interest and no import.  In
this I guess I agree with Rorty [sri-arpa.13322].  Rorty is willing to
grant consciousness to thermostats if it's of any help.

(Best definition of formal mathematics I know: "The science where you don't
know what you're talking about or whether what you're saying is true".)

                        A. Colvin
                        mac@virginia

------------------------------

Date: 12 Nov 83 0:37:48-PST (Sat)
From: decvax!genrad!security!linus!utzoo!utcsstat!laura @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: utcsstat.1420

        The other problem with the "turtles should be killing baby
rabbits" definition of intelligence is that it seems to imply that
killing (or at least surviving) is an indication of intelligence.
I would rather not believe this, unless there is compelling evidence
that the two are related.  So far I have not seen the evidence.

Laura Creighton
utcsstat!laura

------------------------------

Date: 20 Nov 83 0:24:46-EST (Sun)
From: pur-ee!uiucdcs!trsvax!karl @ Ucb-Vax
Subject: Re: Slow Intelligence - (nf)
Article-I.D.: uiucdcs.3789



     " ....  I'm not at all sure that people's working definition
     of  intelligence  has anything at all to do with either time
     or survival.  "

                    Glenn Reid

I'm not sure that people's working definition of intelligence has
anything at all to do with ANYTHING AT ALL.  The quoted statement
implies that people's working definition of intelligence is different -
it is subjective to each individual.  I would like to claim that each
individual's working definition of intelligence is subject to change also.


What we are working with here is conceptual, not a tangible object
which we can point to in an instant.  If the object is conceptual, and
therefore subjective, then it seems that we can (and probably will)
change its definition as our collective experiences teach us differently.


                                        Karl T. Braun
                                        ...ctvax!trsvax!karl

------------------------------

End of AIList Digest
********************

∂14-Nov-83  1702	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #97
Received: from SRI-AI by SU-AI with TCP/SMTP; 14 Nov 83  16:59:41 PST
Date: Monday, November 14, 1983 8:59AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #97
To: AIList@SRI-AI


AIList Digest            Monday, 14 Nov 1983       Volume 1 : Issue 97

Today's Topics:
  Pattern Recognition - Vector Fields,
  Psychology - Defense,
  Ethics - AI Responsibilities,
  Seminars - NRL & Logic Specifications & Deductive Belief
----------------------------------------------------------------------
			
Date: Sun, 13 Nov 83 19:25:40 PST
From: Philip Kahn <v.kahn@UCLA-LOCUS>
Subject: Need references in field of spatial pattern recognition

        This letter to AI-LIST is a request for references from all
of you out there that are heavily into spatial pattern recognition.

        First let me explain my approach, then I'll hit you with my
request.  Optical flow and linear contrast edges have been getting a
lot of attention recently.  Following this approach, I view a line as a
finite ordered set of [image] elements, each of which is treated as a
directed line (a vector with direction and magnitude).

        Here's what I am trying to define:  with such a definition
of a line, it should be possible to create mappings between lines
to form fairly abstract ideas of similarity between lines.  Since
objects are viewed as a particular arrangement of lines, this analysis
would suffice to identify objects as being alike.  For example, finding
the two lines possessing the most similarities (i.e.,
MAX ( LINE1 .intersection. LINE2 ) ) may be one criterion of comparison.
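[An editorial sketch of the representation described above, not from the
original message: a "line" as an ordered sequence of directed elements, with
similarity taken as the size of the intersection of quantized elements,
loosely following the MAX( LINE1 .intersection. LINE2 ) criterion.  The
quantization into eight direction classes is an assumption for illustration.]

```python
import math
from collections import Counter

def elements(points, bins=8):
    """Quantize each segment of a polyline into one of `bins`
    direction classes, ignoring magnitude for simplicity."""
    elems = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)
        elems.append(round(angle / (2 * math.pi) * bins) % bins)
    return elems

def similarity(line1, line2):
    """Count the direction classes the two lines share, treating
    each line's element sequence as a multiset."""
    c1, c2 = Counter(elements(line1)), Counter(elements(line2))
    return sum((c1 & c2).values())
```

Two congruent lines at different positions then compare as identical, since
only the directed elements, not absolute coordinates, enter the comparison.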

        I'm looking for any references you might have on this area.
This INCLUDES:
        1) physiology/biology/neuroanatomy articles dealing with
           functional mappings from the ganglion to any level of
           cortical processing.
        2) fuzzy set theory.  This includes ordered set theory and
           any and all applications of set theory to pattern recognition.
        3) any other pertinent references

        I would greatly appreciate any references you might provide.
After a week or two, I will compile the references and put them
on the AI-LIST so that we all can use them.

                Viva la effort!
                Philip Kahn


[My correspondence with Philip indicates that he is already familiar
with much of the recent literature on optic flow.  He has found little,
however, on the subject of pattern recognition in vector fields.  Can
anyone help? -- KIL]

------------------------------

Date: Sun, 13 Nov 1983  22:42 EST
From: Montalvo%MIT-OZ@MIT-MC.ARPA
Subject: Rational Psychology [and Reply]

    Date: 28 Sep 83 10:32:35-PDT (Wed)
    To: AIList at MIT-MC
    From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
    Subject: RE: Rational Psychology [and Reply]

    ... Is psychology rational?
    Someone said that all sciences are rational, a moot point, but not that
    relevant unless one wishes to consider Psychology a science.  I do not.
    This does not mean that psychologists are in any way inferior to chemists
    or to REAL scientists like those who study physics.  But I do think there
    ....

    ----GaryFostel----


This is an old submission, but having just read it I felt compelled to
reply.  I happen to be a Computer Scientist, but I think
Psychologists, especially Experimental Psychologists, are better
scientists than the average Computer "Scientist".  At least they have
been trained in the scientific method, a skill most Computer
Scientists lack.  Just because Psychologists, by and large, cannot defend
themselves on this list is no reason to make idle attacks based on only
very superficial knowledge of the subject.

Fanya Montalvo

------------------------------

Date: Sun 13 Nov 83 13:14:06-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: just a reminder...

Artificial intelligence promises to alter the world in enormous ways during our
lifetime;  I  believe it's crucial for all of us to look forward to the effects
of our work, both individually and collectively, to make sure that it will be
to the benefit of all peoples in the world.

It seems tiresome to remind people of the incredible effect that
AI will have in our lifetimes, yet the profound nature of the changes to the
world made by a small group of researchers makes it crucial that we don't treat
our  efforts  casually. For example, the military applications of AI will dwarf
that of the atomic bomb, but even more important is the fact  that  the  atomic
bomb is a primarily military device, while AI will impact the world as much (if
not more) in non-military domains.

Physics in the early part of this century was at the cutting edge of knowledge,
similar to the current place of AI. The culmination of their work in the atomic
bomb  changed  their field immensely and irrevocably; even on a personal level,
researchers in physics found their lives  greatly  impacted,  often  shattered.
Many of the top researchers left the field.

During  our  lifetimes  I  think we will see a similar transformation, with the
"fun and games" of these heady years turning into a deadly seriousness, I think
we will also see top researchers leaving the field, once we start to  see  some
of  our effects on the world. It is imperative for all workers in this field to
formulate and share a moral outlook on what we do,  and  hope  to  do,  to  the
world.

I would suggest we have, at the minimum, a three part responsibility. First, we
must  make ourselves aware of the human impact of our work, both short and long
term. Second, we must use this knowledge to guide the course of  our  research,
both  individually  and  collectively, rather than simply flowing into whatever
area the grants are flowing into.  Third  and  most  importantly,  we  must  be
spokespeople  and  consciences  to  the world, forcing others to be informed of
what we are doing and its effects.  Researches who still cling to  "value-free"
science should not be working in AI.

I will suggest a few areas we should be thinking about:

-  Use of AI for offensive military use vs. legitimate defense needs. While the
line is vague, a good offense is surely not always the best defense.

- Will the work cause a centralization of power, or cause a decentralization of
power?  Building massive centers of power in this age increases the risk of
humans being dominated by machines.

- Is the work offering tools to extend the grasp of humans, or tools to control
humans?

- Will people have access to the information generated by the work, or will the
benefits of information access be restricted to a few?

Finally, will the work add insights into ourselves as human beings, or will it
simply feed our drives, reflecting our base nature back at  ourselves?  In  the
movie  "Tron"  an  actor  says "Our spirit remains in each and every program we
wrote"; what IS our spirit?

David

------------------------------

Date: 8 Nov 1983 09:44:28-PST
From: Elaine Marsh <marsh@NRL-AIC>
Subject: AI Seminar Schedule

[I am passing this along because it is the first mention of this seminar
series in AIList and will give interested readers the chance to sign up
for the mailing list.  I will not continue to carry these seminar notices
because they do not include abstracts.  -- KIL]


                     U.S. Navy Center for Applied Research
                           in Artificial Intelligence
                     Naval Research Laboratory - Code 7510
                           Washington, DC   20375

                              WEEKLY SEMINAR SERIES

        14 Nov.  1983     Dr. Jagdish Chandra, Director
                          Mathematical Sciences Division
                          Army Research Office, Durham, NC
                                "Mathematical Sciences Activities Relating
                                 to AI and Its Applications at the Army
                                 Research Office"

        21 Nov.  1983     Professor Laveen Kanal
                          Department of Computer Science
                          University of Maryland, College Park, MD
                                "New Insights into Relationships among
                                 Heuristic Search, Dynamic Programming,
                                 and Branch & Bound Procedures"

        28 Nov.  1983     Dr. William Gale
                          Bell Labs
                          Murray Hill, NJ
                                "An Expert System for Regression
                                 Analysis: Applying A.I. Ideas in
                                 Statistics"

         5 Dec.  1983     Professor Ronald Cole
                          Department of Computer Science
                          Carnegie-Mellon University, Pittsburgh, PA
                                "What's New in Speech Recognition?"

        12 Dec.  1983     Professor Robert Haralick
                          Department of Electrical Engineering
                          Virginia Polytechnic Institute, Blacksburg, VA
                                "Application of AI Techniques to the
                                 Interpretation of LANDSAT Scenes over
                                 Mountainous Areas"

   Our meetings are usually held Monday mornings at 10:00 a.m. in the
   Conference Room of the Navy Center for Applied Research in Artificial
   Intelligence (Bldg. 256) located on Bolling Air Force Base, off I-295,
   in the South East quadrant of Washington, DC.

   Coffee will be available starting at 9:45 a.m.

   If you would like to speak, or be added to our mailing list, or would
   just like more information contact Elaine Marsh at marsh@nrl-aic
                                                     [(202)767-2382].

------------------------------

Date: Mon 7 Nov 83 15:20:15-PST
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral

                [Reprinted from the SU-SCORE bboard.]


                                  Ph.D. Oral
          COMPILING LOGIC SPECIFICATIONS FOR PROGRAMMING ENVIRONMENTS
                               November 16, 1983
                      2:30 p.m., Location to be announced
                              Stephen J. Westfold


A major problem in building large programming systems is in keeping track of
the numerous details concerning consistency relations between objects in the
domain of the system.  The approach taken in this thesis is to encourage the
user to specify a system using very-high-level, well-factored logic
descriptions of the domain, and have the system compile these into efficient
procedures that automatically maintain the relations described.  The approach
is demonstrated by using it in the programming environment of the CHI
Knowledge-based Programming system.  Its uses include describing and
implementing the database manager, the dataflow analyzer, the project
management component and the system's compiler itself.  It is particularly
convenient for developing knowledge representation schemes, for example for
such things as property inheritance and automatic maintenance of inverse
property links.

The problem description using logic assertions is treated as a program such as
in PROLOG except that there is a separation of the assertions that describe the
problem from assertions that describe how they are to be used.  This
factorization allows the use of more general logical forms than Horn clauses as
well as encouraging the user to think separately about the problem and the
implementation.  The use of logic assertions is specified at a level natural to
the user, describing implementation issues such as whether relations are stored
or computed, that some assertions should be used to compute a certain function,
that others should be treated as constraints to maintain the consistency of
several interdependent stored relations, and whether assertions should be used
at compile- or execution-time.

Compilation consists of using assertions to instantiate particular procedural
rule schemas, each one of which corresponds to a specialized deduction, and
then compiling the resulting rules to LISP.  The rule language is a convenient
intermediate between the logic assertion language and the implementation
language in that it has both a logic interpretation and a well-defined
procedural interpretation.  Most of the optimization is done at the logic
level.

------------------------------

Date: Fri 11 Nov 83 09:56:17-PST
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral

                [Reprinted from the SU-SCORE bboard.]

                                  Ph.D. Oral

                       Tuesday, Nov. 15, 1983, 2:30 p.m.

                  Bldg. 170 (history corner), conference room

                          A DEDUCTIVE MODEL OF BELIEF

                                 Kurt Konolige


Reasoning about knowledge and belief of computer and human agents is assuming
increasing importance in Artificial Intelligence systems in the areas of
natural language understanding, planning, and knowledge  representation in
general.  Current formal models of belief that form the basis for most of these
systems are derivatives of possible- world semantics for belief.  However,,
this model suffers from epistemological and heuristic inadequacies.
Epistemologically, it assumes that agents know all the consequences of their
belief.  This assumption is clearly inaccurate, because it doesn't take into
account resource limitations on an agent's reasoning ability.  For example, if
an agent knows the rules of chess, it then follows in the possible- world model
that he knows whether white has a winning strategy or not.  On the heuristic
side, proposed mechanical deduction procedures have been first-order
axiomatizations of the possible-world belief.

A more natural model of belief is a deduction model:  an agent has a set of
initial beliefs about the world in some internal language, and a deduction
process for deriving some (but not necessarily all)  logical consequences of
these beliefs.  Within this model, it is possible to account for resource
limitations of an agent's deduction process; for example, one can model a
situation in which an agent knows the rules of chess but does not have the
computational  resources to search the complete game tree before making a move.
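[A toy editorial sketch, not from the thesis, of the resource-limited
deduction the abstract describes: forward-chaining over Horn-style rules,
stopped after a fixed number of rounds to model an agent that derives some,
but not necessarily all, consequences of its initial beliefs.]

```python
def believed(facts, rules, max_steps):
    """facts: set of atoms; rules: list of (premises, conclusion) pairs.
    Returns the beliefs derivable within max_steps rounds of deduction."""
    beliefs = set(facts)
    for _ in range(max_steps):
        # One round: fire every rule whose premises are already believed.
        new = {concl for prems, concl in rules
               if set(prems) <= beliefs and concl not in beliefs}
        if not new:
            break  # deductive closure reached early
        beliefs |= new
    return beliefs

# With one deduction step the agent believes "b" but not yet "c";
# with two steps it reaches the full closure.
rules = [(["a"], "b"), (["b"], "c")]
shallow = believed({"a"}, rules, max_steps=1)
deep = believed({"a"}, rules, max_steps=2)
```

In the possible-world model the agent would believe "c" immediately; bounding
max_steps is what lets the deduction model capture limited resources.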

This thesis is an investigation of Gentzen-type formalization of the deductive
model of belief.  Several important original results are  proven.  Among these
are soundness and completeness theorems for a deductive belief logic; a
correspondence result that shows the possible-worlds model is a special case of
the deduction model; and an analog of Herbrand's Theorem for the belief
logic. Several other topics of knowledge and belief are explored in the thesis
from the viewpoint of the deduction model, including a theory of introspection
about self-beliefs, and a theory of circumscriptive ignorance, in which facts
an agent doesn't know are formalized by limiting or circumscribing the
information available to him.

------------------------------

End of AIList Digest
********************

∂15-Nov-83  1838	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #98
Received: from SRI-AI by SU-AI with TCP/SMTP; 15 Nov 83  18:37:28 PST
Date: Tuesday, November 15, 1983 10:21AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #98
To: AIList@SRI-AI


AIList Digest            Tuesday, 15 Nov 1983      Volume 1 : Issue 98

Today's Topics:
  Intelligence - Definitions & Metadiscussion,
  Looping Problem,
  Architecture - Parallelism vs. Novel Architecture,
  Pattern Recognition - Optic Flow & Forced Matching,
  Ethics & AI,
  Review - Biography of Turing
----------------------------------------------------------------------

Date: 14 Nov 1983 15:03-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest   V1 #96

An intelligent race is one with a winner, not one that keeps on
rehashing the first 5 yards till nobody wants to watch it anymore.
        FC

------------------------------

Date: 14 Nov 83 10:22:29-PST (Mon)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: Intelligence and Killing
Article-I.D.: ncsu.2396


    Someone wondered if there was evidence that intelligence was related to
    the killing off of other animals.  Presumably that person is prepared to
    refute the apparent simultaneous claims of man as the most intelligent
    and the most deadly animal.   Personally, I might vote dolphins as more
    intelligent, but I bet they do their share of killing too.  They eat things.
    ----GaryFostel----

------------------------------

Date: 14 Nov 83 14:01:55-PST (Mon)
From: ihnp4!ihuxv!portegys @ Ucb-Vax
Subject: Behavioristic definition of intelligence
Article-I.D.: ihuxv.584

What is the purpose of knowing whether something is
intelligent?  Or has a soul?  Or has consciousness?

I think one of the reasons is that it makes it easier to
deal with it.  If a creature is understood to be a human
being, we all know something about how to behave toward it.
And if a machine exhibits intelligence, the quintessential
quality of human beings, we also will know what to do.

One of the things that this implies is that we really should
not worry too much about whether a machine is intelligent
until one gets here.  The definition of it will be in part
determined by how we behave toward it. Right now, I don't feel
very confused about how to act in the presence of a computer
running an AI program.

           Tom Portegys, Bell Labs IH, ihuxv!portegys

------------------------------

Date: 12 Nov 83 19:38:02-PST (Sat)
From: decvax!decwrl!flairvax!kissell @ Ucb-Vax
Subject: Re: the halting problem in history
Article-I.D.: flairvax.267

"...If there were any subroutines in the brain that did not halt..."

It seems to me that there are likely large numbers of subroutines in the
brain that aren't *supposed* to halt.  Like breathing.  Nothing wrong with
that; the brain is not a metaphor for a single-instruction-stream
processor.  I've often suspected, though, that some pathological states,
depression, obsession, addiction, etcetera can be modeled as infinite
loops "executed" by a portion of the brain, which may be why "shock"
treatments sometimes have beneficial effects on depression: a brutal
"reset" of the whole "system".

------------------------------

Date: Tue, 15 Nov 83 07:58 PST
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: parallelism vs. novel architecture

There has been a lot of discussion in this group recently about the
role of parallelism in artificial intelligence.  If I'm not mistaken,
this discussion began in response to a message I sent in, reviving a
discussion of a year ago in Human-Nets.  My original message raised
the question of whether there might exist some crucial, hidden,
architectural mechanism, analogous to DNA in genetics, which would
greatly clarify the workings of intelligence.  Recent discussions
have centered on the role of parallelism alone.  I think this misses
the point.  While parallelism can certainly speed things up, it is
not the kind of fundamental departure from past practices which I
had in mind.  Perhaps a better example would be Turing's and von
Neumann's concept of the stored-program computer, replacing earlier
attempts at hard-wired computers.  This was a fundamental break-
through, without which nothing like today's computers could be
practical.  Perhaps true intelligence, of the biological sort,
requires some structural mechanism which has yet to be imagined.
While it's true that a serial Turing machine can do anything in
principle, it may be thoroughly impractical to program it to be
truly intelligent, both because of problems of speed and because of
the basic awkwardness of the architecture.  What is hopelessly
cumbersome in this architecture may be trivial in the right one.  I
know this sounds pretty vague, but I don't think it's meaningless.

------------------------------

Date: Mon 14 Nov 83 17:59:07-PST
From: David E.T. Foulser <FOULSER@SU-SCORE.ARPA>
Subject: Re: AIList Digest   V1 #97

There is a paper by Kruskal on multi-dimensional scaling that might be of
interest to the user interested in vision processing. I'm not too clear on
what he's doing, so this could be off-base.

                                Dave Foulser

------------------------------

Date: Mon 14 Nov 83 22:24:45-MST
From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
Subject: Pattern Matchers

Thanks for the replies about loop detection; some food for thought
in there...

My next puzzle is about pattern matchers.  Has anyone looked carefully
at the notion of a "non-failing" pattern matcher?  By that I mean one
that never or almost never rejects things as non-matching.  Consider
a database of assertions (or whatever) and the matcher as a search
function which takes a pattern as argument.  If something in the db
matches the pattern, then it is returned.  At this point, the caller
can either accept or reject the item from the db.  If rejected, the
matcher would be called again, to find something else matching, and
so forth.  So far nothing unusual.  The matcher will eventually
signal utter failure, indicating that there is nothing satisfactory in the
database.  My idea is to have the matcher constructed in such a way
that it will return things until the database is entirely scanned, even
if the given pattern is a very simple and rigid one.  In other words,
the matcher never gives up - it will always try to find the most
tenuous excuse to return a match.

Applications I have in mind: NLP for garbled and/or incomplete sentences,
and creative thinking (what does a snake with a tail in its mouth
have to do with benzene? talk about tenuous connections!).

The idea seems related to fuzzy logic (an area I am sadly ignorant
of), but other than that, there seems to be no work on the idea
(perhaps it's a stupid one?).  There seem to be two main problems -
organizing the database in such a way that the matcher can easily
progress from exact matches to extremely remote ones (can almost
talk about a metric space of assertions!),  and setting up the
matcher's caller so as not to thrash too badly (example: a parser
may have trouble deciding whether a sentence is grammatically
incorrect or a word's misspelling looks like another word,
if the word analyzer has a nonfailing matcher).

Does anybody know anything about this?  Is there a fatal flaw
somewhere?

                                                Stan Shebs

BTW, a frame-based system can be characterized as a semantic net
(if you're willing to mung concepts!), and a semantic net can
be mapped into an undirected graph, which *is* a metric space.
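
[A minimal sketch of the non-failing matcher idea.  The tuple-assertion
database, the '?' wildcard, and the mismatch-counting score below are all
invented for illustration, not taken from any published system; the score
plays the role of the "metric" on assertions mentioned above.]

```python
# Hypothetical sketch of a "non-failing" matcher: every assertion in the
# database is scored against the pattern, and candidates are yielded from
# the closest match to the most tenuous one, so the matcher never gives
# up until the whole database has been scanned.

def match_score(pattern, item):
    """Crude distance: one point per position where pattern and item
    disagree; '?' matches anything at no cost; length differences cost
    one point per extra element."""
    diffs = sum(1 for p, x in zip(pattern, item) if p != '?' and p != x)
    return diffs + abs(len(pattern) - len(item))

def nonfailing_match(pattern, database):
    """Yield every assertion, best match first.  The caller rejects
    candidates until one is acceptable; the matcher only runs dry once
    everything has been offered."""
    for score, item in sorted((match_score(pattern, a), a) for a in database):
        yield item

db = [('isa', 'snake', 'reptile'),
      ('isa', 'benzene', 'molecule'),
      ('shape', 'benzene', 'ring'),
      ('shape', 'ouroboros', 'ring')]

candidates = list(nonfailing_match(('shape', 'snake', 'ring'), db))
# The two 'ring'-shaped assertions come back first; the others follow
# as progressively more tenuous matches.
```

The thrashing problem shows up immediately in this sketch: the caller
needs some cutoff policy, since the worst candidates are offered with
the same confidence as the best.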

------------------------------

Date: 14 November 1983 1359-PST (Monday)
From: crummer at AEROSPACE (Charlie Crummer)
Subject: Ethics and AI Research

Dave Rogers brought up the subject of ethics in AI research. I agree with him
that we must continually evaluate the projects we are asked to work on.
Unfortunately, like the example he gave of physicists working on the bombs,
we will not always know what the government has in mind for our work. It may
be valid to indict the workers on the Manhattan project because they really
did have an idea what was going on but the very early researchers in the
field of radioactivity probably did not know how their discoveries would be
used.

The application of morality must go beyond passively choosing not to
work on certain projects. We must become actively involved in the
application by our government of the ideas we create. Once an idea or
physical effect is discovered it can never be undiscovered.  If I
choose not to work on a project (which I definitely would if I thought
it immoral) that may not make much difference. Someone else will
always be waiting to pick up the work. It is sort of like preventing
rape by refusing to rape anyone.

  --Charlie

------------------------------

Date: 14 Nov 83  1306 PST
From: Russell Greiner <RDG@SU-AI>
Subject: Biography of Turing

                [Reprinted from the SU-SCORE bboard.]

n055  1247  09 Nov 83
BC-BOOK-REVIEW (UNDATED)
By CHRISTOPHER LEHMANN-HAUPT
c. 1983 N.Y. Times News Service
ALAN TURING: The Enigma. By Andrew Hodges. 587 pages.
Illustrated. Simon & Schuster. $22.50.

    He is remembered variously as the British cryptologist whose
so-called ''Enigma'' machine helped to decipher Germany's top-secret
World War II code; as the difficult man who both pioneered and
impeded the advance of England's computer industry; and as the
inventor of a theoretical automaton sometimes called the ''Turing
(Editors: umlaut over the u) Machine,'' the umlaut being, according
to a glossary published in 1953, ''an unearned and undesirable
addition, due, presumably, to an impression that anything so
incomprehensible must be Teutonic.''
    But this passionately exhaustive biography by Andrew Hodges, an
English mathematician, brings Alan Turing very much back to life and
offers a less forbidding impression. Look at any of the many verbal
snapshots that Hodges offers us in his book - Turing as an
eccentrically unruly child who could keep neither his buttons aligned
nor the ink in his pen, and who answered his father when asked if he
would be good, ''Yes, but sometimes I shall forget!''; or Turing as
an intense young man with a breathless high-pitched voice and a
hiccuppy laugh - and it is difficult to think of him as a dark
umlauted enigma.
    Yet the mind of the man was an awesome force. By the time he was 24
years old, in 1936, he had conceived as a mathematical abstraction
his computing machine and completed the paper ''Computable Numbers,''
which offered it to the world. Thereafter, Hodges points out, his
waves of inspiration seemed to flow in five-year intervals - the
Naval Enigma in 1940, the design for his Automatic Computing Engine
(ACE) in 1945, a theory of structural evolution, or morphogenesis, in
1950. In 1951, he was elected a Fellow of the Royal Society. He was
not yet 40.
    But the next half-decade interval did not bring further revelation.
In February 1952, he was arrested, tried, convicted and given a
probationary sentence for ''Gross Indecency contrary to Section 11 of
the Criminal Law Amendment Act 1885,'' or the practice of male
homosexuality, a ''tendency'' he had never denied and in recent years
had admitted quite openly. On June 7, 1954, he was found dead in his
home near Manchester, a bitten, presumably cyanide-laced apple in his
hand.
    Yet he had not been despondent over his legal problems. He was not
in disgrace or financial difficulty. He had plans and ideas; his work
was going well. His devoted mother - about whom he had of late been
having surprisingly (to him) hostile dreams as the result of a
Jungian psychoanalysis - insisted that his death was the accident she
had long feared he would suffer from working with dangerous
chemicals. The enigma of Alan Mathison Turing began to grow.
    Andrew Hodges is good at explaining Turing's difficult ideas,
particularly the evolution of his theoretical computer and the
function of his Enigma machines. He is adept at showing us the
originality of Turing's mind, especially the passion for truth (even
when it damaged his career) and the insistence on bridging the worlds
of the theoretical and practical. The only sections of the biography
that grow tedious are those that describe the debates over artificial
intelligence - or maybe it's the world's resistance to artificial
intelligence that is tedious. Turing's position was straightforward
enough: ''The original question, 'Can machines think?' I believe to
be too meaningless to deserve discussion. Nevertheless I believe that
at the end of the century the use of words and general educated
opinion will have altered so much that one will be able to speak of
machines thinking without expecting to be contradicted.''
    On the matter of Turing's suicide, Hodges concedes its
incomprehensibility, but then announces with sudden melodrama: ''The
board was ready for an end game different from that of Lewis
Carroll's, in which Alice captured the Red Queen, and awoke from
nightmare. In real life, the Red Queen had escaped to feasting and
fun in Moscow. The White Queen would be saved, and Alan Turing
sacrificed.''
    What does Hodges mean by his portentous reference to cold-war
politics? Was Alan Turing a murdered spy? Was he a spy? Was he the
victim of some sort of double-cross? No, he was none of the above:
the author is merely speculating that as the cold war heated up, it
must have become extremely dangerous to be a homosexual in possession
of state secrets. Hodges is passionate on the subject of the
precariousness of being homosexual; it was partly his participation
in the ''gay liberation'' movement that got him interested in Alan
Turing in the first place.
    Indeed, one has to suspect Hodges of an overidentification with Alan
Turing, for he goes on at far too great length on Turing's
existential vulnerability. Still, word by word and sentence by
sentence, he can be exceedingly eloquent on his subject. ''He had
clung to the simple amidst the distracting and frightening complexity
of the world,'' the author writes of Turing's affinity for the
concrete.
    ''Yet he was not a narrow man,'' Hodges continues. ''Mrs. Turing was
right in saying, as she did, that he died while working on a
dangerous experiment. It was the experiment called LIFE - a subject
largely inducing as much fear and embarrassment for the official
scientific world as for her. He had not only thought freely, as best
he could, but had eaten of two forbidden fruits, those of the world
and of the flesh. They violently disagreed with each other, and in
that disagreement lay the final unsolvable problem.''

------------------------------

End of AIList Digest
********************

∂16-Nov-83  1906	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #99
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Nov 83  19:05:49 PST
Date: Wednesday, November 16, 1983 2:25PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #99
To: AIList@SRI-AI


AIList Digest           Thursday, 17 Nov 1983      Volume 1 : Issue 99

Today's Topics:
  AI Literature - Comtex,
  Review - Abacus,
  Artificial Humanity,
  Conference - SPIE Call for Papers,
  Seminar - CRITTER for Critiquing Circuit Designs,
  Military AI - DARPA Plans (long message)
----------------------------------------------------------------------

Date: Wed 16 Nov 83 10:14:02-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Comtex

The Comtex microfiche series seems to be alive and well, contrary
to a rumor printed in an early AIList issue.  The ad they sent me
offers the Stanford and MIT AI memoranda (over $2,000 each set), and
says that the Purdue PRIP [pattern recognition and image processing]
technical reports will be next.  Also forthcoming are the SRI and
Carnegie-Mellon AI reports.

                                        -- Ken Laws

------------------------------

Date: Wed 16 Nov 83 10:31:26-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Abacus

I have the first issue of Abacus, the new "soft" computer science
magazine edited by Anthony Ralston.  It contains a very nice survey or
introduction to computer graphics for digital filmmaking and an
interesting exploration of how the first electronic digital computer
came to be.  There is also a superficial article about computer vision
which fails to answer its title question, "Why Computers Can't See
(Yet)".  [It is possible that I'm being overly harsh since this is my
own area of expertise.  My feeling, however, is that the question
cannot be answered by just pointing out that vision is difficult and
that we have dozens of different approaches, none of which works in
more than specialized cases.  An adequate answer requires a guess at
how it is that the human vision system can work in all cases, and why
we have not been able to duplicate it.]

The magazine also offers various computer-related departments,
notably those covering book reviews, the law, personal computing,
puzzles, and politics.  Humorous anecdotes are solicited for
filler material, a la Reader's Digest.  There is no AI-related
column at present.

The magazine has a "padded" feel, particularly since every ad save
one is by Springer-Verlag, the publisher.  They even ran out of
things to advertise and so repeated several full-page ads.  No doubt
this is a new-issue problem and will quickly disappear.  I wish
them well.

                                        -- Ken Laws

------------------------------

Date: 16 Nov 1983 10:21:32 EST (Wednesday)
From: Mark S. Day <mday@bbnccj>
Subject: Artificial Humanity

     From: ihnp4!ihuxv!portegys @ Ucb-Vax
     Subject: Behavioristic definition of intelligence

     What is the purpose of knowing whether something is
     intelligent?  Or has a soul?  Or has consciousness?

     I think one of the reasons is that it makes it easier to
     deal with it.  If a creature is understood to be a human
     being, we all know something about how to behave toward it.
     And if a machine exhibits intelligence, the quintessential
     quality of human beings, we also will know what to do.

Without wishing to flame or start a pointless philosophical
discussion, I do not consider intelligence to be the quintessential
quality of human beings.  Nor do I expect to behave in the same way
towards an artificially intelligent program as I would towards a
person.  Turing tests etc. notwithstanding, I think there is a
distinction between "artificial intelligence" and "artificial
humanity," and that by and large people are not striving to create
"artificial humanity."

------------------------------

Date: Wed 16 Nov 83 09:30:18-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Artificial Humanity

I attended a Stanford lecture by Doug Lenat on Tuesday.  He mentioned
three interesting bugs that developed in EURISKO, a self-monitoring
and self-modifying program.

One turned up when EURISKO erroneously claimed to have discovered a
new type of flip-flop.  The problem was traced to an array indexing
error.  EURISKO, realizing that it had never in its entire history
had a bounds error, had deleted the bounds-checking code.  The first
bounds error occurred soon after.

Another bug cropped up in the "credit assignment" rule base.  EURISKO
was claiming that a particular rule had been responsible for discovering
a great many other interesting rules.  It turned out that the gist of
the rule was "If the system discovers something interesting, attach my
name as the discoverer."

The third bug became evident when EURISKO halted at 4:00 one morning
waiting for an answer to a question.  The system was supposed to know
that questions were OK when a person was around, but not at night with
no people at hand.  People are represented in its knowledge base in the
same manner as any other object.  EURISKO wanted (i.e., had as a goal)
to ask a question.  It realized that the reason it could not was that
no object in its current environment had the "person" attribute.  It
therefore declared itself to be a "person", and proceeded to ask the
question.

Doug says that it was rather difficult to explain to the system why
these were not reasonable things to do.

                                        -- Ken Laws

------------------------------

Date: Wed 16 Nov 83 10:09:24-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: SPIE Call for Papers

SPIE has put out a call for papers for its Technical Symposium
East '84 in Arlington, April 29 - May 4.  One of the 10 subtopics
is Applications of AI, particularly image understanding, expert
systems, autonomous navigation, intelligent systems, computer
vision, knowledge-based systems, contextual scene analysis, and
robotics.

Abstracts are due Nov. 21, manuscripts by April 2.  For more info,
contact

  SPIE Technical Program Committee
  P.O. Box 10
  Bellingham, Washington  98227-0010

  (206) 676-3290, Technical Program Dept.
  Telex 46-7053

                                        -- Ken Laws

------------------------------

Date: 15 Nov 83 14:19:54 EST
From: Smadar <KEDAR-CABELLI@RUTGERS.ARPA>
Subject: An III talk this Thursday...

                 [Reprinted from the RUTGERS bboard.]

          Title:    CRITTER - A System for 'Critiquing' Circuits
          Speaker:  Van Kelly
          Date:     Thursday, November 17, 1983, 1:30-2:30 PM
          Location: Hill Center, Seventh floor lounge

       Van  Kelly,  a  Ph.D.  student  in  our  department, will describe a
    computer system, CRITTER, for  'critiquing'  digital  circuit  designs.
    This informal talk is based on his current thesis research.  Here is an
    abstract of the talk:

    CRITTER is  an  exploratory  prototype  design  aid  for  comprehensive
    "critiquing" of digital circuit designs.  While originally intended for
    verifying  a circuit's functional correctness and timing safety, it can
    also be used to  estimate  design  robustness,  sensitivity  to  device
    parameters,  and  (to some extent) testability.  CRITTER has been built
    using Artificial Intelligence ("Expert  Systems")  technology  and  its
    reasoning is guided by an extensible collection of electronic knowledge
    derived  from human experts.  Also, a new non-procedural representation
    for both the real-time behavior of circuits and circuit  specifications
    has  led  to a streamlined circuit modeling formalism based on ordinary
    mathematical function composition.   A  version  of  CRITTER  has  been
    tested  on  circuits  with  complexities  of  up to a dozen TTL SSI/MSI
    packages.  A more powerful version is  being  adapted  for  use  in  an
    automated VLSI design environment.

------------------------------

Date: 16 Nov 83 12:58:07 PST (Wednesday)
From: John Larson <JLarson.PA@PARC.ARPA>
Subject: AI and the military (long message)

Received over the network  . . .

STRATEGIC COMPUTING PLAN ANNOUNCED; REVOLUTIONARY ADVANCES
IN MACHINE INTELLIGENCE TECHNOLOGY TO MEET CRITICAL DEFENSE NEEDS

  Washington, D.C. (7 Nov. 1983) - - Revolutionary advances in the way
computers will be applied to tomorrow's national defense needs were
described in a comprehensive "Strategic Computing" plan announced
today by the Defense Advanced Research Projects Agency (DARPA).

  DARPA's plan encompasses the development and application of machine
intelligence technology to critical defense problems.  The program
calls for transcending today's computer capabilities by a "quantum
jump."  The powerful computers to be developed under the plan will be
driven by "expert systems" that mimic the thinking and reasoning
processes of humans. The machines will be equipped with sensory and
communication modules enabling them to hear, talk, see and act on
information and data they develop or receive.  This new technology as
it emerges during the coming decade will have unprecedented
capabilities and promises to greatly increase our national security.

  Computers are already widely employed in defense, and are relied on
to help hold the field against larger forces.  But current computers
have inflexible program logic, and are limited in their ability to
adapt to unanticipated enemy actions in the field.  This problem is
heightened by the increasing pace and complexity of modern warfare.
The new DARPA program will confront this challenge by producing
adaptive, intelligent computers specifically aimed at critical
military applications.

  Three initial applications are identified in the DARPA plan.  These
include autonomous vehicles (unmanned aircraft, submersibles, and land
vehicles), expert associates, and large-scale battle management
systems.

  In contrast with current guided missiles and munitions, the new
autonomous vehicles will be capable of complex, far-ranging
reconnaissance and attack missions, and will exhibit highly adaptive
forms of terminal homing.

  A land vehicle described in the plan will be able to navigate
cross-country from one location to another, planning its route from
digital terrain data, and updating its plan as its vision and image
understanding systems sense and resolve ambiguities between observed
and stored terrain data.  Its expert local-navigation system will
devise schemes to ensure concealment and avoid obstacles as the
vehicle pursues its mission objectives.

  A pilot's expert associate will be developed that can interact via
speech communications and function as a "mechanized co-pilot". This
system will enable a pilot to off-load lower-level instrument
monitoring, control, and diagnostic functions, freeing him to focus on
high-priority decisions and actions.  The associate will be trainable
and personalizable to the requirements of specific missions and the
methods of an individual pilot.  It will heighten pilots' capabilities
to act effectively and decisively in high stress combat situations.

  The machine intelligence technology will also be applied in a
carrier battle-group battle management system. This system will aid in
the information fusion, option generation, decision making, and event
monitoring by the teams of people responsible for managing such
large-scale, fast-moving combat situations.

  The DARPA program will achieve its technical objectives and produce
machine intelligence technology by jointly exploiting a wide range of
recent scientific advances in artificial intelligence, computer
architecture, and microelectronics.

  Recent advances in artificial intelligence enable the codification
in sets of computer "rules" of the thinking processes that people use
to reason, plan, and make decisions.  For example, a detailed
codification of the thought processes and heuristics by which a person
finds his way through an unfamiliar city using a map and visual
landmarks might be employed as the basis of an experimental expert
system for local navigation (for the autonomous land vehicle).  Such
expert systems are already being successfully employed in medical
diagnosis, experiment planning in genetics, mineral exploration, and
other areas of complex human expertise.

  Expert systems can often be decomposed into separate segments that
can be processed concurrently. For example, one might search for a
result along many paths in parallel, taking the first satisfactory
solution and then proceeding on to other tasks.  In many expert
systems, rules simply "lie in wait" - firing only if a specific
situation arises. Different parts of such a system could be operated
concurrently to watch for the individual contexts in which their rules
are to fire.
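
[An illustrative sketch of the rule-firing concurrency being described.
The rules, situation fields, and actions below are invented for the
example, and a real production system would not pay thread overhead for
conditions this cheap; the point is only the data flow: each rule's
condition is independent and can be tested in parallel.]

```python
# Sketch: every rule "lies in wait" for its triggering context, and all
# rule conditions are evaluated concurrently rather than scanned one at
# a time.

from concurrent.futures import ThreadPoolExecutor

RULES = [
    {'name': 'low-fuel',   'when': lambda s: s['fuel'] < 0.2,
     'then': 'divert to tanker'},
    {'name': 'radar-lock', 'when': lambda s: s['locked'],
     'then': 'deploy countermeasures'},
    {'name': 'off-course', 'when': lambda s: abs(s['drift']) > 5,
     'then': 'replan route'},
]

def fire_ready_rules(situation):
    """Test every rule's condition in parallel; return the actions of
    those rules whose context has arisen."""
    with ThreadPoolExecutor() as pool:
        ready = list(pool.map(lambda r: r['when'](situation), RULES))
    return [r['then'] for r, ok in zip(RULES, ready) if ok]

actions = fire_ready_rules({'fuel': 0.1, 'locked': False, 'drift': 2})
# → ['divert to tanker']
```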

  DARPA plans to develop special computers that will exploit
opportunities for concurrent processing of expert systems.  This
approach promises a large increase in the power and intelligence of
such systems.  Using "coarse-mesh" machines consisting of multiple
microprocessors, an increase in power of a factor of one hundred over
current systems will be achievable within a few years.  By creating
special VLSI chip designs containing multiple "fine-mesh" processors,
by populating entire silicon wafers with hundreds of such chips, and
by using high-bandwidth optoelectronic cables to interconnect groups
of wafers, increases of three or four orders of magnitude in symbol
processing and rule-firing rates will be achieved as the research
program matures. While the program will rely heavily on silicon
microelectronics for high-density processing structures, extensive use
will also be made of gallium arsenide technology for high-rate signal
processing, optoelectronics, and for military applications requiring
low power dissipation and high immunity to radiation.

  The expert system technology will enable the DARPA computers to
"think smarter."  The special architectures for concurrency and the
faster, denser VLSI microelectronics will enable them to "think harder
and faster."  The combination of these approaches promises to be
potent indeed.

  But machines that mimic thinking are not enough by themselves. They
must be provided with sensory devices that mimic the functions of eyes
and ears. They must have the ability to see their environment, to hear
and understand human language, and to respond in kind.

  Huge computer processing rates will be required to provide effective
machine vision and machine understanding of natural language.  Recent
advances in the architecture of special processor arrays promise to
provide the required rates.  By patterning many small special
processors together on a silicon chip, computer scientists can now
produce simple forms of machine vision in a manner analogous to that
used in the retina of the eye. Instead of each image pixel being
sequentially processed as when using a standard von Neumann computer,
the new processor arrays allow thousands of pixels to be processed
simultaneously. Each image pixel is processed by just a few transistor
switches located close together in a processor cell that communicates
over short distances with neighboring cells.  The number of
transistors required to process each pixel can be perhaps one
one-thousandth of that employed in a von Neumann machine, and the
short communications distances lead to much faster processing rates
per pixel. All these effects multiply the factor of thousands gained
by concurrency.  The DARPA program plans to provide special vision
subsystems that have rates as high as one trillion von Neumann
equivalent operations per second as the program matures in the late
1980's.
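
[The data flow being described can be imitated serially in ordinary
code: each output pixel depends only on a small neighborhood, so every
cell's update is independent and could in principle be assigned its own
processor.  The 3x3 smoothing operation below is my own illustrative
choice, not part of the DARPA plan.]

```python
# Pure-Python sketch of the "processor per pixel" idea: each cell is
# computed from only its immediate 3x3 neighborhood, so all cell
# updates are mutually independent.  (Here they run serially; a
# processor array would run them all at once.)

def local_average(image):
    h, w = len(image), len(image[0])
    def cell(y, x):
        # Each "processor" sees just its own neighborhood, clipped at
        # the image border.
        neigh = [image[j][i]
                 for j in range(max(0, y - 1), min(h, y + 2))
                 for i in range(max(0, x - 1), min(w, x + 2))]
        return sum(neigh) / len(neigh)
    return [[cell(y, x) for x in range(w)] for y in range(h)]

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
smoothed = local_average(img)
# The bright centre spreads into its neighbours: smoothed[1][1] == 1.0
```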

  The DARPA Strategic Computing plan calls for the rapid evolution of
a set of prototype intelligent computers, and their experimental
application in military test-bed environments.  The planned activities
will lead to a series of demonstrations of increasingly sophisticated
machine intelligence technology in the selected applications as the
program progresses.

  DARPA will utilize an extensive infrastructure of computers,
computer networks, rapid system prototyping services, and silicon
foundries to support these technology explorations.  This same
infrastructure will also enable the sharing and propagation of
successful results among program participants.  As experimental
intelligent machines are created in the program, some will be added to
the computer network resources - further enhancing the capabilities of
the research infrastructure.


  The Strategic Computing program will be coordinated closely with
Under Secretary of Defense Research and Engineering, the Military
Services, and other Defense Agencies.  A number of advisory panels and
working groups will also be constituted to assure inter-agency
coordination and maintain a dialogue within the scientific community.

  The program calls for a cooperative effort among American industry,
universities, other research institutions, and government.
Communication is critical in the management of the program since many
of the contributors will be widely dispersed throughout the U.S.  Heavy
use will be made of the Defense Department's ARPANET computer network
to link participants and to establish a productive research
environment.

  Ms. Lynn Conway, Assistant Director for Strategic Computing in
DARPA's Information Processing Techniques Office, will manage the new
program.  Initial program funding is set at $50M in fiscal 1984. It is
proposed at $95M in FY85, and estimated at $600M over the first five
years of the program.

  The successful achievement of the objectives of the Strategic
Computing program will lead to the deployment of a new generation of
military systems containing machine intelligence technology.  These
systems promise to provide the United States with important new
methods of defense against both massed forces and unconventional
threats in the future - methods that can raise the threshold and
decrease the likelihood of major conflict.

------------------------------

End of AIList Digest
********************

∂20-Nov-83  1722	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #100    
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Nov 83  17:21:47 PST
Date: Sunday, November 20, 1983 2:53PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #100
To: AIList@SRI-AI


AIList Digest            Sunday, 20 Nov 1983      Volume 1 : Issue 100

Today's Topics:
  Intelligence - Definition & Msc.,
  Looping Problem - The Zahir,
  Scientific Method - Psychology
----------------------------------------------------------------------

Date: Wed, 16 Nov 1983 10:48:34 EST
From: AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications Mgr.)
Subject: Intelligence and Categorization

     I think Tom Portegys' comment in 1:98 is very true.  Knowing whether or
not a thing is intelligent, has a soul, etc., is quite helpful in letting
us categorize it.  And, without that categorization, we're unable to know
how to understand it.  Two minor asides that might be relevant in this
regard:

     1)  There's a school of thought in the fields of linguistics, folklore,
and anthropology which is based on the notion (admittedly arguable)
that the only way to truly understand a culture is to first record and
understand its native categories, as these structure both its language and its
thought, at many levels.  (This ties in to the Sapir-Whorf hypothesis that
language structures culture, not the reverse...)  From what I've read in this
area, there is definite validity in this approach.  So, if it's reasonable to
try and understand a culture in terms of its categories (which may or may not
be translatable into our own culture's categories, of course), then it's
equally reasonable for us to need to categorize new things so that we can
understand them within our existing framework.

     2)  Back in medieval times, there was a concept known as the "Great
Chain of Being", which essentially stated that everything had its place in
the scheme of things; at the bottom of the chain were inanimate things, at the
top was God, and the various flora and fauna were in-between.  This set of
categories structured a lot of medieval thinking, and had major influences on
Western thought in general, including thought about the nature of intelligence.
Though the viewpoint implicit in this theory isn't widely held any more, it's
still around in other, more modern, theories, but at a "subconscious" level.
As a result, the notion of 'machine intelligence' can be a troubling one,
because it implies that the inanimate is being relocated in the chain to a
position nearly equal to that of man.

I'm ranging a bit far afield here, but this ought to provoke some discussion...
Dave Axler

------------------------------

Date: 15 Nov 83 15:11:32-PST (Tue)
From: pur-ee!CS-Mordred!Pucc-H.Pucc-I.Pucc-K.ags @ Ucb-Vax
Subject: Re: Parallelism & Consciousness - (nf)
Article-I.D.: pucc-k.115

Faster = More Intelligent.  Now there's an interesting premise...

According to relativity theory, clocks (and bodily processes, and everything
else) run faster at the top of a mountain or on a plane than they do at sea
level.  This has been experimentally confirmed.

Thus it seems that one can become more intelligent merely by climbing a
mountain.  Of course the effect is temporary...

Maybe this is why we always see cartoons about people climbing mountains to
inquire about "the meaning of life" (?)

                                Dave Seaman
                                ..!pur-ee!pucc-k!ags

------------------------------

Date: 17 Nov 83 16:38 EST
From: Jim Lynch <jimlynch@nswc-wo>
Subject: Continuing Debate (discussion) on intelligence.

   I have enjoyed the continuing discussion concerning the definition of
intelligence and would only add a few thoughts.
   1.  I tend to agree with Minsky that intelligence is a social concept,
but I believe that it is probably even more of an emotional one. Intelligence
seems to fall in the same category with notions such as beauty, goodness,
pleasant, etc.  These concepts are personal, intensely so, and difficult to
describe, especially in any sort of quantitative terms.
   2.  A good part of the difficulty with defining Artificial Intelligence is
due, no doubt, to a lack of a good definition for intelligence.  We probably
cannot define AI until the psychologists define "I".
   3.  Continuing with 2, the definition probably should not worry us too much.
After all, do psychologists worry about "Natural Computation"?  Let us let the
psychologists worry about what intelligence is, let us worry about how to make
it artificial!!  (As has been pointed out many times, this is certainly an
iterative process and we can surely learn much from each other!).
   4.  The notion of intelligence seems to be a continuum; it is doubtful
that we can define a crisp and fine line dividing the intelligent from the
non-intelligent.  The current debate has provided enough examples to make
this clear.  Our job, therefore, is not to make computers intelligent, but
to make them more intelligent.
                              Thanks for the opportunity to comment,
                                     Jim Lynch, Dahlgren, Virginia

------------------------------

Date: Thu 17 Nov 83 16:07:41-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Intelligence

I had some difficulty refuting a friend's argument that intelligence
is "problem solving ability", and that deciding what problems to solve
is just one facet or level of intelligence.  I realize that this is
a vague definition, but does anyone have a refutation?

I think we can take for granted that summing the same numbers over and
over is not more intelligent than summing them once.  Discovering a
new method of summing them (e.g., finding a pattern and a formula for
taking advantage of it) is intelligent, however.  To some extent,
then, the novelty of the problem and the methods used in its solution
must be taken into account.
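The summing contrast can be made concrete with a toy sketch (illustrative code of my own; the function names are hypothetical): brute-force re-derivation versus exploiting a discovered pattern.

```python
def sum_by_iteration(n):
    """Sum 1..n by brute force, re-deriving nothing."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    """Exploit the discovered pattern: n pairs each summing to n+1, halved."""
    return n * (n + 1) // 2

# Both agree; only the second reflects a discovery about the problem itself.
assert sum_by_iteration(100) == sum_by_formula(100) == 5050
```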

Suppose that we define intelligence in terms of the problem-solving
techniques available in an entity's repertoire.  A machine's intelligence
could be described much as a pocket calculator's capabilities are:
this one has modus ponens, that one can manipulate limits of series.
The partial ordering of such capabilities must necessarily be goal-
dependent and so should be left to the purchaser.

I agree with the AIList reader who defined an intelligent entity as
one that builds and refines knowledge structures representing its world.
Ability to manipulate and interconvert particular knowledge structures
fits well into the capability rating system above.  Learning, or ability
to remember new techniques so that they need not be rederived, is
downplayed in this view of intelligence, although I am sure that it is
more than just an efficiency hack.  Problem solving speed seems to be
orthogonal to the capability dimension, as does motivation to solve
problems.

                                        -- Ken Laws

------------------------------

Date: 16 Nov 83 4:21:55-PST (Wed)
From: harpo!seismo!philabs!linus!utzoo!utcsstat!laura @ Ucb-Vax
Subject: KILLING THINGS
Article-I.D.: utcsstat.1439

I think that one has to make a distinction between dolphins killing fish
to eat, and hypothetical turtles killing rabbits, not to eat, but because
they compete for the same land resources. To my mind they are different
sorts of killings (though from the point of view of the hapless rabbit
or fish they may be the same). Dolphins kill sharks that attack the school,
though -- I do not think that this 'self-defense' killing is the same as
the planned extermination of another species.

If you believe that planned extermination is the definition of intelligence,
then I'll bet you are worried about SETI. On the other hand, I suppose you
must not believe that pacifist vegetarian monks qualify as intelligent.
Or is intelligence something possessed by a species rather than an individual?
Or perhaps you grant that eating plants is indeed killing them. Now we
have defined all animals, and plants like the Venus flytrap, as intelligent,
while most plants are not. All the protists that I can think of right now
would also be intelligent, though a euglena would be an interesting case.

I think that "killing things" is either too general or too specific
(depending on your definition of killing and which things you admit
to your list of "things") to be a useful guide for intelligence.

What about having fun? Perhaps the ability to laugh is the dividing point
between man (as a higher intelligence) and animals, who seem to have
some appreciation for pleasure (if not fun) as distinct from plants and
protists whose joy I have never seen measured. Dolphins seem to have
a sense of fun as well, which is (to my mind) a very good thing.

What this bodes for Mr. Spock, though, is not nice. And despite
megabytes of net.jokes, this 11/70 isn't chuckling. :-)

Laura Creighton
utzoo!utcsstat!laura

------------------------------

Date: Sun 20 Nov 83 02:24:00-CST
From: Aaron Temin <CS.Temin@UTEXAS-20.ARPA>
Subject: Re: Artificial Humanity

I found these errors really interesting.

I would think a better rule for Eurisko to have used in the bounds
checking case would be to keep the bounds-checking code, but use it less
frequently, only when it was about to announce something as interesting,
for instance.  Then it may have caught the flip-flop error itself, while
still gaining speed other times.

The "credit assignment bug" makes me think Eurisko is emulating some
professors I have heard of....

The person bug doesn't even have to be a bug.  The rule assumes that if a
person is around, then he or she will answer a question typed to a
console, perhaps?  Rather it should state that if a person is around,
Eurisko should ask THAT person the question.  Thus if Eurisko is a
person, it should have asked itself (not real useful, maybe, but less of
a bug, I think).

While computer enthusiasts like to speak of all programs in
anthropomorphic terms, Eurisko seems like one that might really deserve
that.  Anyone know of any others?

-aaron

------------------------------

Date: 13 Nov 83 10:58:40-PST (Sun)
From: ihnp4!houxm!hogpc!houti!ariel!vax135!cornell!uw-beaver!tektronix
      !ucbcad!notes @ Ucb-Vax
Subject: Re: the halting problem in history - (nf)
Article-I.D.: ucbcad.775

Halting problem, lethal infinite loops in consciousness, and the Zahir:

Borges' "Zahir" story was interesting, but the above comment shows just
how successful Borges is in his stylistic approach: by overwhelming the
reader with historical references, he lends legitimacy to an idea that
might only be his own.  Try tracking down some of his references some-
time--it's not easy!  Many of them are simply made up.

Michael Turner (ucbvax!ucbesvax.turner)

------------------------------

Date: 17 Nov 83 13:50:54-PST (Thu)
From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: I recall Rational Psychology
Article-I.D.: ncsu.2407

First, let's not revive the Rational Psychology debate. It died of natural
causes, and we should not disturb its immortal soul. However, F Montalvo
has said something very unpleasant about me, and I'm not quite mature
enough to ignore it.

I was not making an idle attack, nor do I do so with superficial knowledge.
Further, I have made quite similar statements in the presence of the
enemy -- card carrying psychologists.  Those psychologists whose egos are
secure often agree with the assessment.  Proper scientific method is very
hard to apply in the face of stunning lack of understanding or hard,
testable theories.  Most proper experiments are morally unacceptable in
the psychological arena.  As it is, there are so many controls not done,
so many sources of artifact, so much use of statistics to try to ferret
out hoped-for correlations, so much unavoidable anthropomorphism. As with
scholars such as H. Dumpty, you can define "science" to mean what you like,
but I think most psychological work fails the test.

One more thing: it's pretty immature to assume that someone who disagrees
with you has only superficial knowledge of the subject.  (See, I told you
I was not very mature ....)
----GaryFostel----

------------------------------

End of AIList Digest
********************

∂20-Nov-83  2100	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #101    
Received: from SRI-AI by SU-AI with TCP/SMTP; 20 Nov 83  20:59:28 PST
Date: Sunday, November 20, 1983 3:15PM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #101
To: AIList@SRI-AI


AIList Digest            Monday, 21 Nov 1983      Volume 1 : Issue 101

Today's Topics:
  Pattern Recognition - Forced Matching,
  Workstations - VAX,
  Alert - Computer Vision,
  Correction - AI Labs in IEEE Spectrum,
  AI - Challenge,
  Conferences - Announcements and Calls for Papers
----------------------------------------------------------------------

Date: Wed, 16 Nov 83 10:53 EST
From: Tim Finin <Tim.UPenn@Rand-Relay>
Subject: pattern matchers

     From: Stanley T. Shebs <SHEBS@UTAH-20.ARPA>
     Subject: Pattern Matchers
     ... My next puzzle is about pattern matchers.  Has anyone looked carefully
     at the notion of a "non-failing" pattern matcher?  By that I mean one that
     never or almost never rejects things as non-matching. ...

There is a long history of matchers which can be asked to "force" a match.
In this mode, the matcher is given two objects and returns a description
of what things would have to be true for the two objects to match.  Two such
matchers come immediately to my mind - see "How can MERLIN Understand?" by
Moore and Newell in Gregg (ed), Knowledge and Cognition, 1973, and also
"An Overview of KRL, A Knowledge Representation Language" by Bobrow and
Winograd (which appeared in the AI Journal, I believe, in 76 or 77).
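A minimal sketch of such a forced match (my own illustration, not the actual MERLIN or KRL machinery): instead of rejecting a mismatch, the matcher records what would have to be true for the two objects to match.

```python
def force_match(pattern, datum, conditions=None):
    """Match two s-expression-like structures without ever failing:
    return the conditions under which they WOULD match."""
    if conditions is None:
        conditions = []
    if isinstance(pattern, str) and pattern.startswith('?'):
        conditions.append((pattern, 'bound-to', datum))      # variable binding
    elif isinstance(pattern, list) and isinstance(datum, list) \
            and len(pattern) == len(datum):
        for p, d in zip(pattern, datum):
            force_match(p, d, conditions)
    elif pattern != datum:
        conditions.append((pattern, 'must-equal', datum))    # recorded, not fatal
    return conditions

# (block ?x red) against (block b1 blue): no rejection, just a description.
force_match(['block', '?x', 'red'], ['block', 'b1', 'blue'])
# [('?x', 'bound-to', 'b1'), ('red', 'must-equal', 'blue')]
```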

------------------------------

Date: Fri 18 Nov 83 09:31:38-CST
From: CS.DENNEY@UTEXAS-20.ARPA
Subject: VAX Workstations

I am looking for information on the merits (or lack of) of the
VAX Workstation 100 for AI development.

------------------------------

Date: Wed, 16 Nov 83 22:22:03 pst
From: weeks%ucbpopuli.CC@Berkeley (Harry Weeks)
Subject: Computer Vision.

There have been some recent articles in this list on computer
vision, some of them queries for information.  Although I am
not in this field, I read with interest a review article in
Nature last week.  Since Nature may be off the beaten track for
many people in AI (in fact articles impinging on computer science
are rare, and this one probably got in because it also falls
under neuroscience), I'm bringing the article to the attention of
this list.  The review is entitled ``Parallel visual computation''
and appears in Vol 306, No 5938 (3-9 November), page 21.  The
authors are Dana H Ballard, Geoffrey E Hinton and Terrence J
Sejnowski.  There are 72 references into the literature.

                                                Harry Weeks
                                                g.weeks@Berkeley

------------------------------

Date: 17 Nov 83 20:25:30-PST (Thu)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: IEEE Spectrum Alert - (nf)
Article-I.D.: uiucdcs.3909


For safety's sake, let me add a qualification about the table on sources of
funding: it's incorrect. The University of Illinois is represented as having
absolutely NO research in 5th-generation AI, not even under OTHER funding.
This is false, and will hopefully be rectified in the next issue of the
Spectrum. I believe a delegation of our Professors is flying to the coast to
have a chat with the Spectrum staff ...

If we can be so misrepresented, I wonder how the survey obtained its
information. None of our major AI researchers remember any attempts to survey
their work.

                                        Marcel Schoppers
                                        U of Illinois @ Urbana-Champaign

------------------------------

Date: 17 Nov 83 20:25:38-PST (Thu)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: uiucdcs.3910

I agree [with a previous article].
I myself am becoming increasingly worried about a blithe attitude I
sometimes hear: if our technology eliminates some jobs, it will create others.
True, but not everyone will be capable of keeping up with the change.
Analogously, the Industrial Revolution is now seen as a Good Thing, and its
impacts were as profound as those promised by AI. And though it is said that
the growth of knowledge can only be advantageous in the long run (Logical
Positivist view?), many people became victims of the Revolution.

In this respect I very much appreciated an idea that was aired at IJCAI-83,
namely that we should be building expert systems in economics to help us plan
and control the effects of our research.

As for the localization of power, that seems almost inevitable. Does not the
US spend enough on cosmetics to cover the combined Gross National Products of
37 African countries? And are we not so concerned about our Almighty Pocket
that we simply CANNOT export our excess groceries to a needy country, though
the produce rot on our dock? Then we can also keep our technology to ourselves.

One very obvious, and in my opinion sorely needed, application of AI is to
automating legal, veterinary and medical expertise. Of course the law system
and our own doctors will give us hell for this, but on the other hand what kind
of service profession is it that will not serve except at high cost? Those most
in need cannot afford the price. See for yourself what kind of person makes it
through Medical School: those who are most aggressive about beating their
fellow students, or those who have the money to buy their way in. It is little
wonder that so few of them will help the underprivileged -- from the start
the selection criteria weigh against such motivation. Let's send our machines
in where our "doctors" will not go!

                                        Marcel Schoppers
                                        U of Illinois @ Urbana-Champaign

------------------------------

Date: 19 Nov 83 09:22:42 EST (Sat)
From: rej@Cornell (Ralph Johnson)
Subject: The AI Challenge

The recent discussions on AIlist have been boring, so I have another
idea for discussion.  I see no evidence that AI is going to make
as much of a change in the world as data processing or information
retrieval.  While research in AI has produced many results in side areas
such as computer languages, computer architecture, and programming
environments, none of the past promises of AI (automatic language
translation, for example) have been fulfilled.  Why should I expect
anything more in the future?

I am a soon-to-graduate PhD candidate at Cornell.  Since Cornell puts
little emphasis on AI, I decided to learn a little on my own.  Most AI
literature is hard to read, as very little that is concrete is said.  The best
book that I read (best for someone like me, that is) was the three-volume
"Handbook on Artificial Intelligence".  One interesting observation was
that I already knew a large percentage of the algorithms.  I did not
even think of most of them as being AI algorithms.  The searching
algorithms (with the exception of alpha beta pruning) are used in many
areas, and algorithms that do logical deduction are part of computational
mathematics (just my opinion, as I know some consider this hard core AI).
Algorithms in areas like computer vision were completely new, but I could
see no relationship between those algorithms and algorithms in programs
called "expert systems", another hot AI topic.
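For readers who have not met the exception mentioned above, alpha-beta pruning is short to state in code (a generic sketch of my own; the tree-access functions are hypothetical parameters, not from any particular text):

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Minimax search with alpha-beta cutoffs.  `children` maps a node to
    its successors; `value` scores leaves from the maximizer's view."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float('-inf')
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:      # cutoff: the opponent will avoid this line
                break
        return best
    else:
        best = float('inf')
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, children, value))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

The cutoff is what distinguishes it from plain minimax: whole subtrees are skipped once they can no longer affect the result.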

  [Agreed, but the gap is narrowing.  There have been 1 or 2 dozen
  good AI/vision dissertations, but the chief link has been that many
  individuals and research departments interested in one area have
  also been interested in the other.  -- KIL]

As for expert systems, I could see no relationship between one expert system
and the next.  An expert system seems to be a program that uses a lot of
problem-related hacks to usually come up with the right answer.  Some of
the "knowledge representation" schemes (translated "data structures") are
nice, but everyone seems to use different ones.  I have read several tech
reports describing recent expert systems, so I am not totally ignorant.
What is all the noise about?  Why is so much money being waved around?
There seems to be nothing more to expert systems than to other complicated
programs.

  [My own somewhat heretical view is that the "expert system" title
  legitimizes something that every complicated program has been found
  to need: hackery.  A rule-based system is sufficiently modular that
  it can be hacked hundreds of times before it is so cumbersome
  that the basic structures must be rewritten.  It is software designed
  to grow, as opposed to the crystalline gems of the "optimal X" paradigm.
  The best expert systems, of course, also contain explanatory capabilities,
  hierarchical inference, constrained natural language interfaces, knowledge
  base consistency checkers, and other useful features.  -- KIL]
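The modularity claimed in the note above can be sketched in a few lines (a toy forward-chainer of my own, not any particular expert-system shell): each rule is an independent premises/conclusion pair, so new rules can be "hacked in" without rewriting the others.

```python
def forward_chain(facts, rules):
    """Fire rules until no new facts appear.  Each rule is a
    (premises, conclusion) pair; adding one never touches the others."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (['has-fever', 'has-rash'], 'suspect-measles'),
    (['suspect-measles'], 'recommend-isolation'),
    # A later "hack" slots in without disturbing the rules above:
    (['has-fever', 'recent-travel'], 'suspect-malaria'),
]
derived = forward_chain(['has-fever', 'has-rash'], rules)
# 'suspect-measles' and 'recommend-isolation' are both derived.
```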

I know that numerical analysis and compiler writing are well developed fields
because there is a standard way of thinking that is associated with each
area and because a non-expert can use tools provided by experts to perform
computation or write a parser without knowing how the tools work.  In fact,
a good test of an area within computer science is whether there are tools
that a non-expert can use to do things that, ten years ago, only experts
could do.  Is there anything like this in AI?  Are there natural language
processors that will do what YACC does for parsing computer languages?

There seem to be a number of answers to me:

1)  Because of my indoctrination at Cornell, I categorize much of the
    important results of AI in other areas, thus discounting the achievements
    of AI.

2)  I am even more ignorant than I thought, and you will enlighten me.

3)  Although what I have said describes other areas of AI pretty well, yours
    is an exception.

4)  Although what I have said describes past results of AI, major achievements
    are just around the corner.

5)  I am correct.

You may be saying to yourself, "Is this guy serious?"  Well, sort of.  In
any case, this should generate more interesting and useful information
than trying to define intelligence, so please treat me seriously.

        Ralph Johnson

------------------------------

Date: Thu 17 Nov 83 16:57:55-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Conference Announcements and Call for Papers

                [Reprinted from the SU-SCORE bboard.]

Image Technology 1984 37th annual conference  May 20-24, 1984
Boston, Mass.  Jim Clark, papers chairman

British Robot Association 7th annual conference  14-17, May 1984
Cambridge, England   Conference director-B.R.A. 7,
British Robot Association, 28-30 High Street, Kempston, Bedford
MK427AJ, England

First International Conference on Computers and Applications
Beijing, China, June 20-22, 1984   co-sponsored by CIE computer society
and IEEE computer society

CMG XIV conference on computer evaluation--preliminary agenda
December 6-9, 1983  Crystal City, Va.

International Symposium on Symbolic and Algebraic Computation
EUROSAM 84  Cambridge, England July 9-11, 1984  call for papers
M. Mignotte, Centre de Calcul, Universite Louis Pasteur, 7 rue
Rene Descartes, F67084 Strasbourg, France

ACM Computer Science Conference  The Future of Computing
February 14-16, 1984  Philadelphia, Penn. Aaron Beller, Program
Chair, Computer and Information Science Department, Temple University
Philadelphia, Penn. 19122

HL

------------------------------

Date: Fri 18 Nov 83 04:00:10-CST
From: Werner Uhrig  <CMP.WERNER@UTEXAS-20.ARPA>
Subject: ***** Call for Papers:  LISP and Functional Programming *****

please help spread the word by announcing it on your local machines.  thanks
                ---------------

()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()
()                                CALL FOR PAPERS                           ()
()                             1984 ACM SYMPOSIUM ON                        ()
()                        LISP AND FUNCTIONAL PROGRAMMING                   ()
()                UNIVERSITY OF TEXAS AT AUSTIN, AUGUST 5-8, 1984           ()
()            (Sponsored by the ASSOCIATION FOR COMPUTING MACHINERY)        ()
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()

This is the third in a series of biennial conferences on the LISP language and
issues related to applicative languages.  Especially welcome are papers
addressing implementation problems and programming environments.  Areas of
interest include (but are not restricted to) systems, large implementations,
programming environments and support tools, architectures, microcode and
hardware implementations, significant language extensions, unusual applications
of LISP, program transformations, compilers for applicative languages, lazy
evaluation, functional programming, logic programming, combinators, FP, APL,
PROLOG, and other languages of a related nature.

Please send eleven (11) copies of a detailed summary (not a complete paper) to
the program chairman:

        Guy L. Steele Jr.
        Tartan Laboratories Incorporated
        477 Melwood Avenue
        Pittsburgh, Pennsylvania  15213

Submissions will be considered by each member of the program committee:

 Robert Cartwright, Rice            William L. Scherlis, Carnegie-Mellon
 Jerome Chailloux, INRIA            Dana Scott, Carnegie-Mellon
 Daniel P. Friedman, Indiana        Guy L. Steele Jr., Tartan Laboratories
 Richard P. Gabriel, Stanford       David Warren, Silogic Incorporated
 Martin L. Griss, Hewlett-Packard   John Williams, IBM
 Peter Henderson, Stirling

Summaries should explain what is new and interesting about the work and what
has actually been accomplished.  It is important to include specific findings
or results and specific comparisons with relevant previous work.  The committee
will consider the appropriateness, clarity, originality, practicality,
significance, and overall quality of each summary.  Time does not permit
consideration of complete papers or long summaries; a length of eight to twelve
double-spaced typed pages is strongly suggested.

February 6, 1984 is the deadline for the submission of summaries.  Authors will
be notified of acceptance or rejection by March 12, 1984.  The accepted papers
must be typed on special forms and received by the program chairman at the
address above by May 14, 1984.  Authors of accepted papers will be asked to
sign ACM copyright forms.

Proceedings will be distributed at the symposium and will later be available
from ACM.

Local Arrangements Chairman             General Chairman

Edward A. Schneider                     Robert S. Boyer
Burroughs Corporation                   University of Texas at Austin
Austin Research Center                  Institute for Computing Science
12201 Technology Blvd.                  2100 Main Building
Austin, Texas 78727                     Austin, Texas 78712
(512) 258-2495                          (512) 471-1901
CL.SCHNEIDER@UTEXAS-20.ARPA             CL.BOYER@UTEXAS-20.ARPA

------------------------------

End of AIList Digest
********************

∂22-Nov-83  1724	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #102    
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Nov 83  17:23:17 PST
Date: Tuesday, November 22, 1983 10:31AM
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #102
To: AIList@SRI-AI


AIList Digest            Tuesday, 22 Nov 1983     Volume 1 : Issue 102

Today's Topics:
  AI and Society - Expert Systems,
  Scientific Method - Psychology,
  Architectures - Need for Novelty,
  AI - Response to Challenge
----------------------------------------------------------------------

Date: 20 Nov 83 14:50:23-PST (Sun)
From: harpo!floyd!clyde!akgua!psuvax!simon @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: psuvax.357

It seems a little dangerous "to send machines where doctors won't go" -
you'll get the machines treating the poor, and human experts for the privileged
few.
Also, expert systems for economics and social science to help us would be
fine, if there were a convincing argument (a) that these social sciences are
truly helpful for coping with unpredictable technological change, and (b) that
there is a sufficiently accepted body of quantifiable knowledge to put into
the proposed systems.
janos simon

------------------------------

Date: Mon, 21 Nov 1983  15:24 EST
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: I recall Rational Psychology

    Date: 17 Nov 83 13:50:54-PST (Thu)
    From: ihnp4!houxm!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
    Subject: I recall Rational Psychology

          ... Proper scientific method is very
    hard to apply in the face of stunning lack of understanding or hard,
    testable theories.  Most proper experiments are morally unacceptable in
    the psychological arena.  As it is, there are so many controls not done,
    so many sources of artifact, so much use of statistics to try to ferret
    out hoped-for correlations, so much unavoidable anthropomorphism. As with
    scholars such as H. Dumpty, you can define "science" to mean what you like,
    but I think most psychological work fails the test.

    ----GaryFostel----

You don't seem to be aware of Experimental Psychology, which involves
subjects' consent, proper controls, hypothesis formation and
evaluation, and statistical validation.  Most of it involves sensory
processes and learning.  The studies are very rigorous and must be so
to end up in the literature.  You may be thinking of Clinical Psychology.
If so, please don't lump all of Psychology into the same group.

Fanya Montalvo

------------------------------

Date: 19 Nov 83 11:15:50-PST (Sat)
From: decvax!tektronix!ucbcad!notes @ Ucb-Vax
Subject: Re: parallelism vs. novel architecture - (nf)
Article-I.D.: ucbcad.835

Re: parallelism and fundamental discoveries

The stored-program concept (Von Neumann machine) was indeed a breakthrough
both in the sense of Turing (what is theoretically computable) and in the
sense of Von Neumann (what is a practical machine).  It is noteworthy,
however, that I am typing this message using a text editor with a segment
of memory devoted to program, another segment devoted to data, and with an
understanding on the part of the operating system that if the editor were
to try to alter one of its own instructions, the operating system should
treat this as pathological, and abort it.

In other words, the vaunted power of being able to write data that can be
executed as a program is treated in the most stilted and circumspect manner
in the interests of practicality.  It has been found to be impractical to
write programs that modify their own inner workings.  Yet people do this to
their own consciousness all the time--in a largely unconscious way.
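The point survives even in high-level languages (a small illustration of my own): program-as-data is still available, but only through an explicit, supervised gate, never through ordinary writes to one's own instructions.

```python
# Program text held as ordinary data...
source = "def square(n):\n    return n * n\n"
namespace = {}
# ...may become executable only through the sanctioned, explicit gate.
exec(compile(source, "<generated>", "exec"), namespace)
assert namespace["square"](7) == 49
```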

Turing-computability is perhaps a necessary condition for intelligence.
(That's been beaten to death here.)  What is needed is a sufficient condition.
Can that possibly be a single breakthrough or innovation?  There is no
question that, working from the agenda for AI that was so hubristically
laid out in the 50's and 60's, such a breakthrough is long overdue.  Who
sees any intimation of it now?

Perhaps what is needed is a different kind of AI researcher.  New ground
is hard to break, and harder still when the usual academic tendency is to
till old soil until it is exhausted.  I find it interesting that many of
the new ideas in AI are coming from outside the U.S. AI establishment
(MIT, CMU, Stanford, mainly).  Logic programming seems largely to be a
product of the English-speaking world *apart* from the U.S.  Douglas
Hofstadter's ideas (though probably too optimistic) are at least a sign
that, after all these years, some people find the problem too important
to be left to the experts.  Tally Ho!  Maybe AI needs a nut with the
undaunted style of a Nikola Tesla.

Some important AI people say that Hofstadter's schemes can't work.  This
makes me think of the story about the young 19th century physicist, whose
paper was reviewed and rejected as meaningless by 50 prominent physicists
of the time.  The 51st was Maxwell, who had it published immediately.

Michael Turner (ucbvax!ucbesvax.turner)

------------------------------

Date: 20 November 1983 2359-PST (Sunday)
From: helly at AEROSPACE (John Helly)
Subject: Challenge

I  am  responding  to  Ralph  Johnson's  recent submission concerning the
content and contribution of work in the field  of  AI.    The  following
comments  should  be  evaluated in light of the fact that I am currently
developing an 'expert system' as a dissertation topic at UCLA.

My immediate reaction to Johnson's queries/criticisms of AI is  that  of
hearty  agreement.    Having  read  a  great  deal  of AI literature, my
personal bias is that there is a great deal of rediscovery of  Knuth  in
the  context of new applications.  The only things apparently unique are
that each new 'discovery' carries with  it  a  novel  jargon  with  very
little attempt to connect and build on previous work in the field.  This
reflects a broader concern I have with Computer Science  in  general  in
that,  having been previously trained as a biologist, I find very little
that I consider scientific in this field.  This  does  not  diminish  my
hope for, and consequently my commitment to, work in this area.

Like  many things, this commitment is based on my intuition (read faith)
that there really is something  of  value  in  this  field.    The  only
rationale  I can offer for such a commitment is the presumption that the
lack of progress in AI research is the result of the lack of  scientific
discipline of AI researchers and computer scientists in general.  The AI
community looks much more like a heterogeneous population of hackers than
a disciplined, scientific community.  Maybe this is symptomatic
of a new field of science going through  growing  pains  but  I  do  not
personally  believe  this  is  the  case.    I am unaware of any similar
developmental process in the history of science.

This all sounds pretty negative, I  know.    I  believe  that  criticism
should  always  be  stated with some possible corrective action, though,
and maybe I have some.  Computer science curricula should require formal
scientific training.  Exposure to truly empirical sciences  would  serve
to   familiarize   students  with  the  value  of  systematic  research,
experimental design, hypothesis testing and the like.   We  should  find
ways  to  apply  the  scientific  method  to  our  research  rather than
collecting  a  lot  of  anecdotal  information  about  our  'programming
environment' and 'heuristics' and publishing it at first light.

Maybe computer science is basically an engineering discipline (i.e.,
application-oriented)  rather  than a science.  I believe, however, that
in the least computer science, even if misnamed, offers  powerful  tools
for  investigating  human  information processing (i.e., intelligence) if
approached scientifically.  Properly applied these tools can provide the
same benefits they  have  offered  physicists,  biologists  and  medical
researchers  - insight into mechanisms and techniques for simulating the
systems of interest.

Much of AI is very slick programming.  I'm just not certain that  it  is
anything more than that, at least at present.

------------------------------

Date: Mon 21 Nov 83 14:12:35-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Reply to Ralph Johnson

Your recent msg to AILIST was certainly provocative, and I thought I'd
try to reply to a couple of the points you made.  First, I'm a little
appalled at what you portray as the "Cornell" attitude towards AI.  I
hope things will improve there in the future.  Maybe I can contribute
a little by trying to persuade you that AI has substance.

I'd like to begin by calling attention to the criteria that you are
using to evaluate AI.  I believe that if you applied these same
criteria to other areas of computer science, you would find them
lacking also.  For example, you say that "While research in AI has
produced many results in side areas..., none of the past promises of
AI have been fulfilled."  If we look at other fields of computer
science, we find similar difficulties.  Computer science has promised
secure, reliable, user-friendly computing facilities, cheap and robust
distributed systems, integrated software tools.  But what do we have?
Well, we have some terrific prototypes in research labs, but the rest
of the world is still struggling with miserable computing
environments, systems that constantly crash, and distributed systems
that end up being extremely expensive and unreliable.

The problem with this perspective is that it is not fair to judge a
research discipline by the success of its applications.  In AI
research labs, AI has delivered on many of its early promises.  We now
have machines with limited visual and manipulative capabilities.  And
we do have systems that perform automatic language translation (e.g.,
at Texas).

Another difficulty of judging AI is that it is a "residual"
discipline.  As Avron Barr wrote in the introduction to the AI
Handbook, "The realization that the detailed steps of almost all
intelligent human activity were unknown marked the beginning of
Artificial Intelligence as a separate part of computer science."  AI
tackles the hardest application problems around: those problems whose
solution is not understood.  The rest of computer science is primarily
concerned with finding optimum points along various solution
dimensions such as speed, memory requirements, user interface
facilities, etc.  We already knew HOW to sort numbers before we had
computers.  The role of Computer Science was to determine how to sort
them quickly and efficiently using a computer.  But, we didn't know
HOW to understand language (at least not at a detailed level).  AI's
task has been to find solutions to these kinds of problems.

Since AI has tackled the most difficult problems, it is not surprising
that it has had only moderate success so far.  The bright side of
this, however, is that long after we have figured out whether P=NP, AI
will still be uncovering fascinating and difficult problems.  That's
why I study it.

You are correct in saying that the AI literature is hard to read.  I
think there are several reasons for this.  First, there is a very
large amount of terminology to master in AI.  Second, there are great
differences in methodology.  There is no general agreement within the
AI community about what the hard problems are and how they should be
addressed (although I think this is changing).  Good luck with any
further reading that you attempt.

Now let me address some of your specific observations about AI.  You
say "I already knew a large percentage of the algorithms.  I did not
even think of most of them as being AI algorithms."  I would certainly
agree.  I cite this as evidence that there is a unity to all parts of
computer science, including AI.  You also say "An expert system seems
to be a program that uses a lot of problem-related hacks to usually
come up with the right answer."  I think you have hit upon the key
lesson that AI learned in the seventies: The solution to many of the
problems we attack in AI lies NOT in the algorithms but in the
knowledge.  That lesson reflects itself, not so much in differences in
code, but in differences in methodology.  Expert systems are different
and important because they are built using a programming style that
emphasizes flexibility, transparency, and rapid prototyping over
efficiency.  You say "There seems to be nothing more to expert systems
than to other complicated programs".  I disagree completely.  Expert
systems can be built, debugged, and maintained more cheaply than other
complicated programs.  And hence, they can be targeted at applications
for which previous technology was barely adequate.  Expert systems
(knowledge programming) techniques continue the revolution in
programming that was started with higher-level languages and furthered
by structured programming and object-oriented programming.
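
[The programming style being contrasted here can be made concrete with a toy
sketch in modern Python.  The rules and facts below are invented for
illustration and do not come from any actual expert system; the point is only
that the inference engine is generic, while all domain-specific content sits
in a declarative rule base that can be inspected and edited without touching
the code. -- Ed.]

```python
# A minimal backward-chaining sketch: the engine (prove) never changes;
# all "expertise" lives in the RULES and FACTS data, which is what makes
# such systems cheap to extend and debug.  Rules and facts are invented.

RULES = [
    # (conclusion, [premises that must all hold])
    ("infection", ["fever", "high-white-cell-count"]),
    ("prescribe-antibiotic", ["infection"]),
]
FACTS = {"fever", "high-white-cell-count"}

def prove(goal):
    """Try to establish `goal` from known facts, chaining backward through rules."""
    if goal in FACTS:
        return True
    for conclusion, premises in RULES:
        if conclusion == goal and all(prove(p) for p in premises):
            return True
    return False

print(prove("prescribe-antibiotic"))  # True
```

Adding a new rule is a one-line change to RULES; the control structure is untouched.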

Your view of "knowledge representations" as being identical with data
structures reveals a fundamental misunderstanding of the knowledge vs.
algorithms point.  Most AI programs employ very simple data structures
(e.g., record structures, graphs, trees).  Why, I'll bet there's not a
single AI program that uses leftist-trees or binomial queues!  But, it
is the WAY that these data structures are employed that counts.  For
example, in many AI systems, we use record structures that we call
"schemas" or "frames" to represent domain concepts.  This is
uninteresting.  But what is interesting is that we have learned that
certain distinctions are critical, such as the distinction between a
subset of a set and an element of a set.  Or the distinction between a
causal agent of a disease (e.g., a bacterium) and a feature that is
helpful in guiding diagnosis (e.g., whether or not the patient has
been hospitalized).  Much of AI is engaged in finding and cataloging
these distinctions and demonstrating their value in simplifying the
construction of expert systems.
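
[The subset-of vs. element-of distinction can be sketched in a few lines of
modern Python; the frame names below are invented for illustration and the
representation is deliberately minimal.  The point is that the two kinds of
taxonomic link are kept explicit, so inference can treat them differently. -- Ed.]

```python
# Toy frame sketch (all names invented): "subset_of" links classes to
# classes; "element_of" links one individual to its class.  Confusing the
# two is exactly the error the distinction guards against.

frames = {
    "bacterium":  {"subset_of": "organism"},   # every bacterium is an organism
    "e-coli":     {"subset_of": "bacterium"},  # a subclass, not an instance
    "culture-42": {"element_of": "e-coli"},    # one particular specimen
}

def is_subset(a, b):
    """True if class `a` is (transitively) a subset of class `b`."""
    while a is not None:
        if a == b:
            return True
        a = frames.get(a, {}).get("subset_of")
    return False

def is_element(x, b):
    """True if individual `x` is an element of class `b`."""
    cls = frames.get(x, {}).get("element_of")
    return cls is not None and is_subset(cls, b)

print(is_subset("e-coli", "organism"))       # True
print(is_element("culture-42", "organism"))  # True
print(is_subset("culture-42", "organism"))   # False: an element, not a subset
```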

In your message, you gave five possible answers that you expected to
receive.  I guess mine doesn't fit any of your categories.  I think
you have been quite perceptive in your analysis of AI.  But you are
still looking at AI from the "algorithm" point of view.  If you shift
to the "knowledge" perspective, your criteria for evaluating AI will
shift as well, and I think you will find the field to be much more
interesting.

--Tom Dietterich

------------------------------

Date: 22 Nov 83 11:45:30 EST (Tue)
From: rej@Cornell (Ralph Johnson)
Subject: Clarifying my "AI Challenge"

I am sorry to have created the mistaken impression that I don't think AI should
be done or is worth the money we spend on it.  The side effects alone are
worth much more than has been spent.  I do understand the effects of AI on
other areas of CS.  Even though going to the moon brought no direct benefit
to the US outside of prestige (which, by the way, was enormous), we learned
a lot that was very worthwhile.  Planetary scientists point out that we
would have learned a lot more if we had spent the money directly on planetary
exploration, but the moon race captured the hearts of the public and allowed
the money to be spent on space instead of bombs.  In a similar way, AI
provides a common area for some of our brightest people to tackle very hard
problems, and consequently learn a great deal.  My question, though, is
whether AI is really going to change the world any more than the rest of
computer science is already doing.  Are the great promises of AI going to
be fulfilled?

I am thankful for the comments on expert systems.  Following these lines of
reasoning, expert systems are differentiated from other programs more by the
programming methodology used than by algorithms or data structures.  It is
very helpful to have these distinctions pointed out; they have made several
ideas clearer to me.

The ideas in AI are not really any more difficult than those in other areas
of CS; they are just more poorly explained.  Several times I have run into
someone who can explain well the work that he/she has been doing, and each
time I understand what they are doing.  Consequently, I believe that the
reason that I see few descriptions of how systems work is that the
designers are not sure how they work, or they do not know what is important
in explaining how they work, or they do not know that it is important to
explain how they work.  Are they, in fact, describing how they work, and I
just don't notice?  What I would like is more examples of systems that work,
descriptions of how they work, and of how well they work.

        Ralph Johnson (rej@cornell,  cornell!rej)

------------------------------

Date: Tue 22 Nov 83 09:25:52-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: Clarifying my "AI Challenge"

Ralph,

I can think of a couple of reasons why articles describing Expert
Systems are difficult to follow.  First, these programs are often
immense.  It would take a book to describe all of the system and how
it works.  Hence, AI authors try to pick out a few key things that
they think were essential in getting the system to work.  It is kind
of like reading descriptions of operating systems.  Second, the lesson
that knowledge is more important than algorithms has still not been
totally accepted within AI.  Many people tend to describe their
systems by describing the architecture (i.e., the algorithms and data
structures) instead of the knowledge.  The result is that the reader
is left saying "Yes, of course I understand how backward chaining (or
an agenda system) works, but I still don't understand how it diagnoses
soybean diseases..."  The HEARSAY people are particularly guilty of
this.  Also, Lenat's dissertation includes much more discussion of
architecture than of knowledge.  It often takes many years before
someone publishes a good analysis of the structure of the knowledge
underlying the expert performance of the system.  A good example is
Bill Clancey's work analyzing the MYCIN system.  See his most recent
AI Journal paper.

--Tom

------------------------------

End of AIList Digest
********************

∂27-Nov-83  2131	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #103    
Received: from SRI-AI by SU-AI with TCP/SMTP; 27 Nov 83  21:30:05 PST
Date: Fri Nov 25, 1983 09:29-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #103
To: AIList@SRI-AI


AIList Digest            Friday, 25 Nov 1983      Volume 1 : Issue 103

Today's Topics:
  Alert - Neural Network Simulations & Weizenbaum on The Fifth Generation,
  AI Jargon - Why AI is Hard to Read,
  AI and Automation - Economic Effects & Reliability,
  Conference - Logic Programming Symposium
----------------------------------------------------------------------

Date: Sun, 20 Nov 83 18:05 PST
From: Allen VanGelder <avg@diablo>
Subject: Those interested in AI might want to read ...

                [Reprinted from the SU-SCORE bboard.]

[Those interested in AI might want to read ...]
the article in November *Psychology Today* about Francis Crick and Graeme
Mitchison's neural network simulations. Title is "The Dream Machine", p. 22.

------------------------------

Date: Sun 20 Nov 83 18:50:27-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Re: Those interested in AI might want to read...

                [Reprinted from the SU-SCORE bboard.]

I would guess that the "Psychology Today" article is a simplified form of the
Crick & Mitchison paper which came out in "Nature" about 2 months ago. Can't
comment on the Psychology Today article, but the Nature article was
stimulating and provocative. The same issue of Nature has a paper (referred to
by Crick) of a simulation which was even better than the Crick paper
(sorry, Francis!).

------------------------------

Date: Mon 21 Nov 83 09:58:04-PST
From: Benjamin Grosof <GROSOF@SUMEX-AIM.ARPA>
Subject: Weizenbaum review of "The Fifth Generation": hot stuff!

                [Reprinted from the SU-SCORE bboard.]

The current issue of the NY Review of Books contains a review by Joseph
Weizenbaum of MIT (Author of "Computer Power and Human Reason", I think)
of Feigenbaum and McCorduck's "The Fifth Generation".  Warning: it is
scathing and controversial, hence great reading.  --Benjamin

------------------------------

Date: Wed 23 Nov 83 14:38:38-PST
From: Wilkins  <WILKINS@SRI-AI.ARPA>
Subject: why AI is hard to read

There is one reason much AI literature is hard to read.  It is common for
authors to invent a whole new set of jargon to describe their system, instead
of describing it in some common language (e.g., first-order logic) or relating
it to previous well-understood systems or principles.  In recent years
there has been an increased awareness of this problem, and hopefully things
are improving and will continue to do so.  There are also a lot more
submissions now to IJCAI, etc., so higher standards end up being applied.
Keep truckin'
David Wilkins

------------------------------

Date: 21 Nov 1983 10:54-PST
From: dietz%usc-cse%USC-ECL@SRI-NIC
Reply-to: dietz%USC-ECL@SRI-NIC
Subject: Economic effects of automation

Reply to Marcel Schoppers (AIList 1:101):

I agree that "computers will eliminate some jobs but create others" is
a feeble excuse.  There's not much evidence for it.  Even if it's true,
those whose job skills are devalued will be losers.

But why should this bother me?  I don't buy manufactured goods to
employ factory workers, I buy them to gratify my own desires.   As a
computer scientist I will not be laid off; indeed, automation will
increase the demand for computer professionals.  I will benefit from
the higher quality and lower prices of manufactured goods.  Automation
is entirely in my interest.  I need no excuse to support it.

   ... I very much appreciated the idea ... that we should be building
   expert systems in economics to help us plan and control the effects of
   our research.

This sounds like an awful waste of time to me.  We have no idea how to
predict the economic effects of much of anything except at the most
rudimentary levels, and there is no evidence that we will be able to anytime soon
(witness the failure of econometrics).  There would be no way to test
the systems.  Building expert systems is not a substitute for
understanding.

Automating medicine and law:  a much better idea is to eliminate or
scale back the licensing requirements that allow doctors and lawyers to
restrict entry into their fields.  This would probably be necessary to
get much benefit from expert systems anyway.

------------------------------

Date: 22 Nov 83 11:27:05-PST (Tue)
From: decvax!genrad!security!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: dciem.501

    It seems a little dangerous "to send machines where doctors won't go" -
    you'll get the machines treating the poor, and human experts for the
    privileged few.

If the machines were good enough, I wouldn't mind being underprivileged.
I'd rather be flown into a foggy airport by autopilot than human pilot.

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utcsrgv!dciem!mmt

------------------------------

Date: 22 Nov 1983 13:06:13-EST (Tuesday)
From: Doug DeGroot <Degroot.YKTVMV.IBM@Rand-Relay>
Subject: Logic Programming Symposium (long message)

                 [Excerpt from a notice in the Prolog Digest.]

               1984 International Symposium on Logic Programming

                               February 6-9, 1984

                           Atlantic City, New Jersey
                           BALLY'S PARK PLACE CASINO

                     Sponsored by the IEEE Computer Society


          For more information contact PERIERA@SRI-AI or:

               Registration - 1984 ISLP
               Doug DeGroot, Program Chairman
               IBM Thomas J. Watson Research Center
               P.O. Box 218
               Yorktown Heights, NY 10598

          STATUS           Conference    Tutorial
          Member, IEEE      __ $155      __ $110
          Non-member        __ $180      __ $125
         ____________________________________________________________

                              Conference Overview

          Opening Address:
             Prof. J.A. (Alan) Robinson
             Syracuse University

          Guest Speaker:
             Prof. Alain Colmerauer
             University of Aix-Marseille II
             Marseille, France

          Keynote Speaker:
             Dr. Ralph E. Gomory,
             IBM Vice President & Director of Research,
             IBM Thomas J. Watson Research Center

          Tutorial: An Introduction to Prolog
             Ken Bowen, Syracuse University

          35 Papers, 11 Sessions (11 Countries, 4 Continents)


          Preliminary Conference Program

          Session 1: Architectures I
          __________________________

          1. Parallel Prolog Using Stack Segments on Shared-memory
             Multiprocessors
             Peter Borgwardt (Univ. Minn)

          2. Executing Distributed Prolog Programs on a Broadcast Network
             David Scott Warren (SUNY Stony Brook, NY)

          3. AND Parallel Prolog in Divided Assertion Set
             Hiroshi Nakagawa (Yokohama Nat'l Univ, Japan)

          4. Towards a Pipelined Prolog Processor
             Evan Tick (Stanford Univ,CA) and David Warren

          Session 2: Architectures II
          ___________________________

          1. Implementing Parallel Prolog on a Multiprocessor Machine
             Naoyuki Tamura and Yukio Kaneda (Kobe Univ, Japan)

          2. Control of Activities in the OR-Parallel Token Machine
             Andrzej Ciepielewski and Seif Haridi (Royal Inst. of
             Tech, Sweden)

          3. Logic Programming Using Parallel Associative Operations
             Steve Taylor, Andy Lowry, Gerald Maguire, Jr., and Sal
             Stolfo (Columbia Univ,NY)

          Session 3: Parallel Language Issues
          ___________________________________

          1. Negation as Failure and Parallelism
             Tom Khabaza (Univ. of Sussex, England)

          2. A Note on Systems Programming in Concurrent Prolog
             David Gelertner (Yale Univ,CT)

          3. Fair, Biased, and Self-Balancing Merge Operators in
             Concurrent Prolog
             Ehud Shapiro (Weizmann Inst. of Tech, Israel)

          Session 4: Applications in Prolog
          _________________________________

          1. Editing First-Order Proofs: Programmed Rules vs. Derived Rules
             Maria Aponte, Jose Fernandez, and Phillipe Roussel (Simon
             Bolivar Univ, Venezuela)

          2. Implementing Parallel Algorithms in Concurrent Prolog:
             The MAXFLOW Experience
             Lisa Hellerstein (MIT,MA) and Ehud Shapiro (Weizmann
             Inst. of Tech, Israel)

          Session 5: Knowledge Representation and Data Bases
          __________________________________________________

          1. A Knowledge Assimilation Method for Logic Databases
             T. Miyachi, S. Kunifuji, H. Kitakami, K. Furukawa, A.
             Takeuchi, and H. Yokota (ICOT, Japan)

          2. Knowledge Representation in Prolog/KR
             Hideyuki Nakashima (Electrotechnical Laboratory, Japan)

          3. A Methodology for Implementation of a Knowledge
             Acquisition System
             H. Kitakami, S. Kunifuji, T. Miyachi, and K. Furukawa
             (ICOT, Japan)

          Session 6: Logic Programming plus Functional Programming - I
          ____________________________________________________________

          1. FUNLOG = Functions + Logic: A Computational Model
             Integrating Functional and Logical Programming
             P.A. Subrahmanyam and J.-H. You (Univ of Utah)

          2. On Implementing Prolog in Functional Programming
             Mats Carlsson (Uppsala Univ, Sweden)

          3. On the Integration of Logic Programming and Functional Programming
             R. Barbuti, M. Bellia, G. Levi, and M. Martelli (Univ. of
             Pisa and CNUCE-CNR, Italy)

          Session 7: Logic Programming plus Functional Programming- II
          ____________________________________________________________

          1. Stream-Based Execution of Logic Programs
             Gary Lindstrom and Prakash Panangaden (Univ of Utah)

          2. Logic Programming on an FFP Machine
             Bruce Smith (Univ. of North Carolina at Chapel Hill)

          3. Transformation of Logic Programs into Functional Programs
             Uday S. Reddy (Univ of Utah)

          Session 8: Logic Programming Implementation Issues
          __________________________________________________

          1. Efficient Prolog Memory Management for Flexible Control Strategies
             David Scott Warren (SUNY at Stony Brook, NY)

          2. Indexing Prolog Clauses via Superimposed Code Words and
             Field Encoded Words
             Michael J. Wise and David M.W. Powers, (Univ of New South
             Wales, Australia)

          3. A Prolog Technology Theorem Prover
             Mark E. Stickel, (SRI, CA)

          Session 9: Grammars and Parsing
          _______________________________

          1. A Bottom-up Parser Based on Predicate Logic: A Survey of
             the Formalism and Its Implementation Technique
             K. Uehara, R. Ochitani, O. Kakusho, and J. Toyoda (Osaka
             Univ, Japan)

          2. Natural Language Semantics: A Logic Programming Approach
             Antonio Porto and Miguel Filgueiras (Univ Nova de Lisboa,
             Portugal)

          3. Definite Clause Translation Grammars
             Harvey Abramson, (Univ. of British Columbia, Canada)

          Session 10: Aspects of Logic Programming Languages
          __________________________________________________

          1. A Primitive for the Control of Logic Programs
             Kenneth M. Kahn (Uppsala Univ, Sweden)

          2. LUCID-style Programming in Logic
             Derek Brough (Imperial College, England) and Maarten H.
             van Emden (Univ. of Waterloo, Canada)

          3. Semantics of a Logic Programming Language with a
             Reducibility Predicate
             Hisao Tamaki (Ibaraki Univ, Japan)

          4. Object-Oriented Programming in Prolog
             Carlo Zaniolo (Bell Labs, New Jersey)

          Session 11: Theory of Logic Programming
          _______________________________________

          1. The Occur-check Problem in Prolog
             David Plaisted (Univ of Illinois)

          2. Stepwise Development of Operational and Denotational
             Semantics for Prolog
             Neil D. Jones (Datalogisk Inst, Denmark) and Alan Mycroft
             (Edinburgh Univ, Scotland)
         ___________________________________________________________


                           An Introduction to Prolog

                          A Tutorial by Dr. Ken Bowen

          Outline of the Tutorial

          -  AN OVERVIEW OF PROLOG
          -  Facts, Databases, Queries, and Rules in Prolog
          -  Variables, Matching, and Unification
          -  Search Spaces and Program Execution
          -  Non-determinism and Control of Program Execution
          -  Natural Language Processing with Prolog
          -  Compiler Writing with Prolog
          -  An Overview of Available Prologs

          Who Should Take the Tutorial

          The tutorial is intended for both managers and programmers
          interested in understanding the basics of logic programming
          and especially the language Prolog. The course will focus on
          direct applications of Prolog, such as natural language
          processing and compiler writing, in order to show the power
          of logic programming. Several different commercially
          available Prologs will be discussed and compared.

          About the Instructor

          Dr. Ken Bowen is a member of the Logic Programming Research
          Group at Syracuse University in New York, where he is also a
          Professor in the School of Computer and Information
          Sciences. He has authored many papers in the field of logic
          and logic programming. He is considered to be an expert on
          the Prolog programming language.

------------------------------

End of AIList Digest
********************

∂28-Nov-83  1357	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #104    
Received: from SRI-AI by SU-AI with TCP/SMTP; 28 Nov 83  13:56:35 PST
Date: Mon 28 Nov 1983 09:32-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #104
To: AIList@SRI-AI


AIList Digest            Monday, 28 Nov 1983      Volume 1 : Issue 104

Today's Topics:
  Information Retrieval - Request,
  Programming Languages - Lisp Productivity,
  AI and Society - Expert Systems,
  AI Funding - Capitalistic AI,
  Humor - Problem with Horn Clauses,
  Seminar - Introspective Problem Solver,
  Graduate Program - Social Impacts at UC-Irvine
----------------------------------------------------------------------

Date: Sun, 27 Nov 83 11:41 EST
From: Ed Fox <fox.vpi@Rand-Relay>
Subject: Request for machine readable volumes, info. on retrieval
         projects

   Please send details of how to obtain any machine readable documents such
as books, reference volumes, encyclopedias, dictionaries, journals, etc.
These would be utilized for experiments in information retrieval.  This
is not aimed at large bibliographic databases but rather at finding
a few medium to long items that exist both in book form and full text
computer tape versions (readable under UNIX or VMS).
   Information on existing or planned projects for retrieval of passages
(e.g., paragraphs or pages) from books, encyclopedias, electronic mail
digests, etc. would also be helpful.
     I look forward to your reply.  Thanks in advance, Ed Fox.
Dr. Edward A. Fox, Dept. of Computer Science, 562 McBryde Hall,
Virginia Polytechnic Institute and State University (VPI&SU or Virginia Tech),
Blacksburg, VA 24061; (703)961-5113 or 6931; fox%vpi@csnet-relay via csnet,
foxea%vpivm1.bitnet@berkeley via bitnet

------------------------------

Date: 25 Nov 83 22:47:27-PST (Fri)
From: pur-ee!uiucdcs!smu!leff @ Ucb-Vax
Subject: lisp productivity question - (nf)
Article-I.D.: uiucdcs.4149

Is anybody aware of any studies on programmer productivity in Lisp?

1. Can Lisp programmers program in Lisp at the same number of
   lines per day, week, or month as in 'regular' languages like Pascal, PL/1, etc.?

2. Has anybody tried to write a fairly large program that normally would
   be done in Lisp in a regular language, and compared the ratio of the
   number of lines?

In APL, a letter to Comm. ACM reported that APL programs took one fifth
the number of lines of equivalent programs in regular languages and took
about twice as long per line to debug.  Thus APL improved the productivity
to get a function done by about a factor of two.  I am curious if anything
similar has been done for lisp.

  [One can, of course, write any APL program body as a single line.
  I suspect it would not take much longer to write that way, but it
  would be impossible to modify a week later.  Much the same could be
  said for undocumented and poorly structured Lisp code.  -- KIL]

------------------------------

Date: 22 Nov 83 21:01:33-PST (Tue)
From: decvax!genrad!grkermit!masscomp!clyde!akgua!psuvax!lewis @ Ucb-Vax
Subject: Re:Re: just a reminder... - (nf)
Article-I.D.: psuvax.359

Why should it be dangerous to have machines treating the poor?  There
is no reason to believe that human experts will always be superior to
machines; in fact, a carefully designed expert system could embody all
the skill of the world's best diagnosticians.  In addition, an expert
system would never get tired or complain about its pay.  On the
other hand, perhaps you are worried about the machine lacking 'human'
insight or compassion. I don't think anyone is suggesting that these
qualities can or should be built into such a system.  Perhaps we will
see a new generation of medical personnel whose job will be to use the
available AI facilities to make the most accurate diagnoses, and help
patients interface with the system.  This will provide patients with
the best medical knowledge available, and still allow personal interaction
between patients and technicians.

-jim lewis

psuvax!lewis

------------------------------

Date: 24 Nov 83 22:46:53-PST (Thu)
From: pur-ee!uiucdcs!uokvax!emjej @ Ucb-Vax
Subject: Re: just a reminder... - (nf)
Article-I.D.: uiucdcs.4127

Re sending machines where doctors won't go: do you really think that it's
better that poor people not be treated at all than treated by a machine?
That's a bit much for me to swallow.

                                                James Jones

------------------------------

Date: 22 Nov 83 19:37:14-PST (Tue)
From: pur-ee!uiucdcs!uicsl!Anonymous @ Ucb-Vax
Subject: Capitalistic AI - (nf)
Article-I.D.: uiucdcs.4071

        Have you had your advisor leave to make megabucks in industry?

        Seriously, I feel that this is a major problem for AI.  There
is an extremely limited number of AI professors and a huge demand from
venture capitalists to set them up in a new company.  Even fresh PhD's
are going to be disappearing into industry when they can make several
times the money they would in academia.  The result is an acute (no,
make that terminal) shortage of professors to oversee the new research
generation. The monetary imbalance can only grow as AI grows.

        At this university (UI) there are lots (hundreds?) of undergrads
who want to study AI, and about 8 professors to teach them. Maybe the
federal government ought to recognize that this imbalance hurts our
technological competitiveness. What will prevent academic flight?
Will IBM, Digital, and WANG support professors or will they start
hiring them away?

        Here are a few things needed to keep the schools strong:

                1) Higher salaries for profs in "critical areas."
                   (maybe much higher)

                2) Long term funding of research centers.
                   (buildings, equipment, staff)

                3) University administration support for capitalizing
                   on the results of research, either through making
                   it easy for a professor to maintain a dual life, or
                   by setting up a university owned company to develop
                   and sell the results of research.

------------------------------

Date: 14 Nov 83 17:26:03-PST (Mon)
From: harpo!floyd!clyde!akgua!psuvax!burdvax!sjuvax!bbanerje @ Ucb-Vax
Subject: Problem with Horn Clauses.
Article-I.D.: sjuvax.140

As a novice to Prolog, I have a problem determining whether a
clause is Horn, or non Horn.

I understand that a clause of the form:

             A + ~B + ~C is a Horn clause,

while one of the form:

            A + B + ~C is non-Horn.
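
[The test behind the two examples above can be stated operationally: a clause
is Horn exactly when it contains at most one positive (un-negated) literal.
A small sketch in Python, with a clause represented as a list of
(name, is-positive) pairs -- the representation is invented for
illustration. -- Ed.]

```python
# A clause is Horn iff at most one of its literals is positive.

def is_horn(clause):
    """clause: list of (name, positive) pairs, e.g. ("B", False) means ~B."""
    return sum(1 for _, positive in clause if positive) <= 1

# A + ~B + ~C : one positive literal -> Horn
print(is_horn([("A", True), ("B", False), ("C", False)]))  # True
# A + B + ~C : two positive literals -> not Horn
print(is_horn([("A", True), ("B", True), ("C", False)]))   # False
```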

However, my problem comes when trying to determine if the
following Clause is Horn or non-Horn.
!







                           ------------\
                          /          _  \
                         /_________ / \__**
                        _#        #      **
                       (_   o   o _)        __________
                         xx   !  xx        ! HO HO HO !
                         xxx \_/xxx      __/-----------
                         xxxxxxxxxx

Happy Holidays Everyone!

-- Binayak Banerjee
{bpa!astrovax!burdvax}!sjuvax!bbanerje

------------------------------

Date: 11/23/83 11:48:29
From: AGRE
Subject: John Batali at the AI Revolving Seminar 30 November

                      [Forwarded by SASW@MIT-MC]

John Batali
Trying to build an introspective problem-solver

Wednesday 30 November at 4PM
545 Tech Sq 8th floor playroom

Abstract:

I'm trying to write a program that understands how it works, and uses
that understanding to modify and improve its performance.  In this
talk, I'll describe what I mean by "an introspective problem-solver",
discuss why such a thing would be useful, and give some ideas about
how one might work.

We want to be able to represent how and why some course of action is
better than another in certain situations.  If we take reasoning to be
a kind of action, then we want to be able to represent considerations
that might be relevant during the process of reasoning.  For this
knowledge to be useful the program must be able to reason about itself
reasoning, and the program must be able to affect itself by its
decisions.

A program built on these lines cannot think about every step of its
reasoning -- because it would never stop thinking about "how to think
about" whatever it is thinking about.  On the other hand, we want it
to be possible for the program to consider any and all of its
reasoning steps.  The solution to this dilemma may be a kind of
"virtual reasoning" in which a program can exert reasoned control over
all aspects of its reasoning process even if it does not explicitly
consider each step.  This could be implemented by having the program
construct general reasoning plans which are then run like programs in
specific situations.  The program must also be able to modify
reasoning plans if they are discovered to be faulty.  A program with
this ability could then represent itself as an instance of a reasoning
plan.

Brian Smith's 3-LISP achieves what he calls "reflective" access and
causal connection: A 3-LISP program can examine and modify the state
of its interpreter as it is running.  The technical tricks needed to
make this work will also find their place in an introspective
problem-solver.

My work has involved trying to make sense of these issues, as well as
working on a representation of planning and acting that can deal with
real world goals and constraints as well as with those of the planning
and plan-execution processes.

------------------------------

Date: 25 Nov 1983 1413-PST
From: Rob-Kling <Kling.UCI-20B@Rand-Relay>
Subject: Social Impacts Graduate Program at UC-Irvine


                                     CORPS

                                    -------

                             A Graduate Program on

                 Computing, Organizations, Policy, and Society

                    at the University of California, Irvine


          This interdisciplinary program at the University of California,
     Irvine provides an opportunity for scholars and students to
     investigate the social dimensions of computerization in a setting
     which supports reflective and sustained inquiry.

          The primary educational opportunities are a PhD program in the
     Department of Information and Computer Science (ICS) and MS and PhD
     programs in the Graduate School of Management (GSM).  Students in each
     program can specialize in studying the social dimensions of computing.
     Several students have received graduate degrees from ICS and GSM for
     studying topics in the CORPS program.

          The faculty at Irvine have been active in this area, with many
     interdisciplinary projects, since the early 1970's.  The faculty and
     students in the CORPS program have approached these topics with methods drawn
     from the social sciences.

          The CORPS program focuses upon four related areas of inquiry:

      1.  Examining the social consequences of different kinds of
          computerization on social life in organizations and in the larger
          society.

      2.  Examining the social dimensions of the work and industrial worlds
          in which computer technologies are developed, marketed,
          disseminated, deployed, and sustained.

      3.  Evaluating the effectiveness of strategies for managing the
          deployment and use of computer-based technologies.

      4.  Evaluating and proposing public policies which facilitate the
          development and use of computing in pro-social ways.


          Studies of these questions have focussed on complex information
     systems, computer-based modelling, decision-support systems, the
     myriad forms of office automation, electronic funds transfer systems,
     expert systems, instructional computing, personal computers, automated
     command and control systems, and computing at home.  The questions
     vary from study to study.  They have included questions about the
     effectiveness of these technologies, effective ways to manage them,
     the social choices that they open or close off, the kind of social and
     cultural life that develops around them, their political consequences,
     and their social carrying costs.

          The CORPS program at Irvine has a distinctive orientation -

     (i) in focussing on both public and private sectors,

     (ii) in examining computerization in public life as well as within
           organizations,

     (iii) by examining advanced and common computer-based technologies "in
           vivo" in ordinary settings, and

     (iv) by employing analytical methods drawn from the social sciences.



              Organizational Arrangements and Admissions for CORPS


          The primary faculty in the CORPS program hold appointments in the
     Department of Information and Computer Science and the Graduate School
     of Management.  Additional faculty in the School of Social Sciences,
     and the Program on Social Ecology, have collaborated in research or
     have taught key courses for students in the CORPS program.  Research
     is administered through an interdisciplinary research institute at UCI
     which is part of the Graduate Division, the Public Policy Research
     Organization.

     Students who wish additional information about the CORPS program
     should write to:

               Professor Rob Kling (Kling.uci-20b@rand-relay)
               Department of Information and Computer Science
               University of California, Irvine
               Irvine, Ca. 92717

                                     or to:

               Professor Kenneth Kraemer
               Graduate School of Management
               University of California, Irvine
               Irvine, Ca. 92717

------------------------------

End of AIList Digest
********************

∂29-Nov-83  0155	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #105    
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Nov 83  01:55:06 PST
Date: Mon 28 Nov 1983 22:36-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #105
To: AIList@SRI-AI


AIList Digest            Tuesday, 29 Nov 1983     Volume 1 : Issue 105

Today's Topics:
  AI - Challenge & Responses & Query
----------------------------------------------------------------------

Date: 21 Nov 1983 12:25-PST
From: dietz%usc-cse%USC-ECL@SRI-NIC
Reply-to: dietz%USC-ECL@SRI-NIC
Subject: Re: The AI Challenge

I too am skeptical about expert systems.  Their attraction seems to be
as a kind of intellectual dustbin into which difficulties can be swept.
Have a hard problem that you don't know (or that no one knows) how to
solve?  Build an expert system for it.

Ken Laws' idea of an expert system as a very modular, hackable program
is interesting.  A theory or methodology on how to hack programs would
be interesting and useful, but would become another AI spinoff, I fear.

------------------------------

Date: Wed 23 Nov 83 18:02:11-PST
From: Michael Walker <WALKER@SUMEX-AIM.ARPA>
Subject: response to response to challenge

Tom,

        I thought you made some good points in your response to Ralph
Johnson in the AIList, but one of your claims is unsupported, important,
and quite possibly wrong. The claim I refer to is

        "Expert systems can be built, debugged, and maintained more cheaply
        than other complicated systems. And hence, they can be targeted at
        applications for which previous technology was barely adequate."

        I would be delighted if this could be shown to be true, because I
would very much like to show friends/clients in industry how to use AI to
solve their problems more cheaply.

        However, there are no formal studies that compare a system built
using AI methods to one built using other methods, and no studies that have
attempted to control for other causes of differences in ease of building,
debugging, maintaining, etc. such as differences in programmer experience,
programming language, use or otherwise of structured programming techniques,
etc.

        Given the lack of controlled, reproducible tests of the effectiveness
of AI methods for program development, we have fallen back on qualitative,
intuitive arguments. The same sort of arguments have been and are made for
structured programming, application generators, fourth-generation languages,
high-level languages, and ADA. While there is some truth in the various
claims about improved programmer productivity, they have too often been
overblown as The Solution To All Our Problems.  This is the case with
claiming AI is cheaper than any other method.

        A much more reasonable statement is that AI methods may turn out
to be cheaper / faster / otherwise better than  other methods if anyone ever
actually builds an effective and economically viable expert system.

        My own guess is that it is easier to develop AI systems because we
have been working in a LISP programming environment that has provided tools
like interpreted code, interactive debugging/tracing/editing, masterscope
analysis, etc.  These points were made quite nicely in Beau Shiel's recent
article in Datamation (Power Tools for Programming, I think was the title).
None of these are intrinsic to AI.

        Many military and industry managers who are supporting AI work are
going to be very disillusioned in a few years when AI doesn't deliver what
has been promised. Unsupported claims  about the efficacy of AI aren't going
to help. It could hurt our credibility, and thereby our funding and ability
to continue the basic research.

Mike Walker
WALKER@SUMEX-AIM.ARPA

------------------------------

Date: Fri 25 Nov 83 17:40:44-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: response to response to challenge

Mike,

While I would certainly welcome the kinds of controlled studies that
you sketched in your msg, I think my claim is correct and can be
supported.  Virtually every expert system that has been built has been
targeted at tasks that were previously untouched by computing
technology.  I claim that the reason for this is that the proper
programming methodology was needed before these tasks could be
addressed.  I think the key parts of that methodology are (a) a
modular, explicit representation of knowledge, (b) careful separation
of this knowledge from the inference engine, and (c) an
expert-centered approach in which extensive interviews with experts
replace attempts by computer people to impose a normative,
mathematical theory on the domain.
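
As an illustration (mine, not Dietterich's) of points (a) and (b), a toy forward-chaining engine shows the separation: the rules are modular, explicit data that a domain expert could read and edit, while the engine knows nothing about any particular domain. The rules here are invented for illustration and are not real medical knowledge:

```python
# Knowledge base: each rule is (set of conditions, conclusion) -- explicit,
# modular data, separate from the engine below.  Illustrative rules only.
RULES = [
    ({"fever", "cough"}, "flu-suspected"),
    ({"flu-suspected", "high-risk"}, "refer"),
]

def forward_chain(facts, rules):
    """Domain-independent inference engine: apply rules to a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Swapping in a different rule set changes the system's expertise without touching `forward_chain`, which is the maintainability claim in miniature.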

Since there are virtually no cases where expert systems and
"traditional" systems have been built to perform the same task, it is
difficult to support this claim.  If we look at the history of
computers in medicine, however, I think it supports my claim.
Before expert systems techniques were available, many people
had attempted to build computational tools for physicians.  But these
tools suffered from the fact that they were often burdened with
normative theories and often ignored the clinical aspects of disease
diagnosis.  I blame these deficiencies on the lack of an
"expert-centered" approach.  These programs were also difficult to
maintain and could not produce explanations because they did not
separate domain knowledge from the inference engine.

I did not claim anywhere in my msg that expert systems techniques are
"The Solution to All Our Problems".  Certainly there are problems for
which knowledge programming techniques are superior.  But there are
many more for which they are too expensive, too slow, or simply
inappropriate.  It would be absurd to write an operating system in
EMYCIN, for example!  The programming advances that would allow
operating systems to be written and debugged easily are still
undiscovered.

You credit fancy LISP environments for making expert systems easy to
write, debug, and maintain.  I would certainly agree: The development
of good systems for symbolic computing was an essential prerequisite.
However, the level of program description and interpretation in EMYCIN
is much higher than that provided by the Interlisp system.  And the
"expert-centered" approach was not developed until Ted Shortliffe's
dissertation.

You make a very good point in your last paragraph:

        Many military and industry managers who are supporting AI work
        are going to be very disillusioned in a few years when AI
        doesn't deliver what has been promised. Unsupported claims
        about the efficacy of AI aren't going to help. It could hurt
        our credibility, and thereby our funding and ability to
        continue the basic research.

AI (at least in Japan) has "promised" speech understanding, language
translation, etc. all under the rubric of "knowledge-based systems".
Existing expert-systems techniques cannot solve these problems.  We
need much more research to determine what things CAN be accomplished
with existing technology.  And we need much more research to continue
the development of the technology.  (I think these are much more
important research topics than comparative studies of expert-systems
technology vs. other programming techniques.)

But there is no point in minimizing our successes.  My original
message was in response to an accusation that AI had no merit.
I chose what I thought was AI's most solid contribution: an improved
programming methodology for a certain class of problems.

--Tom

------------------------------

Date: Fri 25 Nov 83 17:52:47-PST
From: Tom Dietterich <DIETTERICH@SUMEX-AIM.ARPA>
Subject: Re: Clarifying my "AI Challange"

Although I've written three messages on this topic already, I guess
I've never really addressed Ralph Johnson's main question:

        My question, though, is whether AI is really going to change
        the world any more than the rest of computer science is
        already doing.  Are the great promises of AI going to be
        fulfilled?

My answer: I don't know.  I view "the great promises" as goals, not
promises.  If you are a physicalist and believe that human beings are
merely complex machines, then AI should in principle succeed.
However, I don't know if present AI approaches will turn out to be
successful.  Who knows?  Maybe the human brain is too complex to ever
be understood by the human brain.  That would be interesting to
demonstrate!

--Tom

------------------------------

Date: 24 Nov 83 5:00:32-PST (Thu)
From: pur-ee!uiucdcs!smu!leff @ Ucb-Vax
Subject: Re: The AI Challenge - (nf)
Article-I.D.: uiucdcs.4118


There was a recent discussion of an AI project that was done at
ONR on determining the cause of a chemical spill in a large chemical
plant with various ducts and pipes and manholes, etc.  I argued that
the thing was just an application of graph algorithms and searching
techniques.

(That project was what could be done in three days by an AI team as
part of a challenge from ONR and quite possibly is not representative.)
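
The "graph algorithms and searching techniques" reading of the spill problem can be made concrete: treat the plant's ducts and pipes as a directed graph and search backwards from the observed spill site. The layout and names below are hypothetical:

```python
# Sketch of the spill problem as plain reverse reachability: edge u -> v
# means fluid can flow from u to v; candidate sources are everything that
# can reach the spill site.
from collections import deque

PIPES = {"tank-A": ["junction"], "tank-B": ["junction"],
         "junction": ["manhole-3"], "storm-drain": ["manhole-3"]}

def possible_sources(graph, spill_site):
    """BFS over reversed edges from the observed spill site."""
    reverse = {}
    for u, vs in graph.items():
        for v in vs:
            reverse.setdefault(v, []).append(u)
    seen, queue = {spill_site}, deque([spill_site])
    while queue:
        node = queue.popleft()
        for prev in reverse.get(node, []):
            if prev not in seen:
                seen.add(prev)
                queue.append(prev)
    return seen - {spill_site}
```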

Theorem proving using resolution is something that someone with just
a normal algorithms background would not simply come up with as "an
application of normal algorithms."  Using if-then rules perhaps might
be a search of the type you might see in an algorithms book, though I
don't expect the average CS person with a background in algorithms to
come up with that application either; once it was pointed out, it
would be quite intuitive.
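
For readers who haven't met it, the resolution rule mentioned above is compact: from two clauses containing a complementary pair of literals, derive their union minus that pair. A minimal propositional version (my sketch, with literals as strings and "~x" as the negation of "x"):

```python
# One resolution step on propositional clauses represented as sets of
# literal strings; deriving the empty clause signals a contradiction.
def resolve(clause1, clause2):
    """Return all resolvents of two clauses."""
    resolvents = []
    for lit in clause1:
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        if neg in clause2:
            resolvents.append((clause1 - {lit}) | (clause2 - {neg}))
    return resolvents
```

A full prover repeats this step over a clause set until the empty clause appears or no new clauses can be derived; the single step is all the "normal algorithms" content there is.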

One interesting note is that although most of the AI stuff is done in
LISP, a big theorem proving program discussed by Wos at a recent IEEE
meeting here was written in PASCAL.  It did some very interesting things.
One point that was made is that they submitted a paper to a logic journal.
Although the journal agreed the results were worth publishing, the "computer
stuff" had to go.

Continuing on this rambling aside, some people submitted results in
mechanical engineering using a symbolic manipulator referencing the use
of the program in a footnote.  The poor referee conscientiously
tried to duplicate the derivations manually.  Finally he noticed the
reference and sent a letter back saying that they must put symbolic
manipulation by computer in the covering letter.

Getting back to the original subject, I had a discussion with someone
doing research in daemons.  After he explained to me what daemons were,
I came to the conclusion they were a fancy name for what you described
as a hack.  A straightforward application of theorem proving or if-then
rule techniques would be inefficient or otherwise infeasible, so one
puts an exception in to handle a certain kind of case.  What is the
difference between that and an error handler for zero divides, rather
than putting a statement everywhere one does a division?
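
The analogy can be made concrete in a modern language: a centralized handler fires only when the exceptional case actually arises, instead of a guard statement at every division site. A small sketch (the signed-infinity convention is my own choice for illustration):

```python
# One central handler for the exceptional case, rather than an explicit
# "if b == 0" test before every division in the program.
def safe_ratio(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return float("inf") if a > 0 else float("-inf") if a < 0 else 0.0
```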

Along the subject of hacking, a DATAMATION article on 'Real Programmers
Don't Use PASCAL.' in which he complained about the demise of the person
who would modify a program on the fly using the switch register, etc.
He remarkeed at the end that some of the debugging techniques in
LISP AI environments were starting to look like the old style techniques
of assembler hackers.

------------------------------

Date: 24 Nov 83 22:29:44-PST (Thu)
From: pur-ee!notes @ Ucb-Vax
Subject: Re: The AI Challenge - (nf)
Article-I.D.: pur-ee.1148

As an aside to this discussion, I'm curious as to just what everyone
thinks of when they think of AI.

I am a student at Purdue, which has absolutely nothing in the way of
courses on what *I* consider AI.  I have done a little bit of reading
on natural language processing, but other than that, I haven't had
much of anything in the way of instruction on this stuff, so maybe I'm
way off base here, but when I think of AI, I primarily think of:

        1) Natural Language Processing, first and foremost.  In
           this, I include being able to "read" it and understand
           it, along with being able to "speak" it.
        2) Computers "knowing" things - i.e., stuff along the
           lines of the famous "blocks world", where the "computer"
           has notions of pyramids, boxes, etc.
        3) Computers/programs which can pass the Turing test (I've
           always thought that ELIZA sort of passes this test, at
           least in the sense that lots of people actually think
           the computer understood their problems).
        4) Learning programs, like the tic-tac-toe programs that
           remember that "that" didn't work out, only on a much
           more grandiose scale.
        5) Speech recognition and understanding (see #1).
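
The rote-learning idea in item 4 can be sketched as a player that records which move lost from which position and avoids repeating it. This is a toy illustration of "remembering that 'that' didn't work out"; the class and method names are invented:

```python
# Minimal rote learner: remember losing (position, move) pairs and avoid
# them; fall back to any legal move if every move has lost before.
import random

class RoteLearner:
    def __init__(self):
        self.bad = {}  # position -> set of moves that led to a loss

    def choose(self, position, legal_moves):
        allowed = [m for m in legal_moves
                   if m not in self.bad.get(position, set())]
        return random.choice(allowed or legal_moves)

    def punish(self, position, move):
        """Called after a loss: that move didn't work from that position."""
        self.bad.setdefault(position, set()).add(move)
```

Scaling this "much more grandiose" runs into exactly the generalization problem the poster gestures at: rote tables do not transfer to positions the program has never seen.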

For some reason, I don't think of pattern recognition (like analyzing
satellite data) as AI.  After all, it seems to me that this stuff is
mostly just "if <cond 1> it's trees, if <cond 2> it's a road, etc.",
which doesn't really seem like "intelligence".

  [If it were that easy, I'd be out of a job.  -- KIL]
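
For what it's worth, the caricature in the previous message would amount to a hand-written decision rule like the sketch below, with invented feature names; the moderator's point is that real satellite data does not decompose into such clean conditions:

```python
# The "if <cond 1> it's trees, if <cond 2> it's a road" caricature as a
# literal decision rule over hypothetical per-pixel features.
def label_pixel(greenness, straightness):
    if greenness > 0.6:
        return "trees"
    if straightness > 0.8:
        return "road"
    return "unknown"
```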

What do you think of when I say "Artificial Intelligence"?  Note that
I'm NOT asking for a definition of AI, I don't think there is one.  I
just want to know what you consider AI, and what you consider "other"
stuff.

Another question -- assuming the (very) hypothetical situation where
computers and their programs could be made to be "infinitely" intelligent,
what is your "dream program" that you'd love to see written, even though
it realistically will probably never be possible?  Jokingly, I've always
said that my dream is to write a "compiler that does what I meant, not
what I said".

--Dave Curry
decvax!pur-ee!davy
eevax.davy@purdue

------------------------------

End of AIList Digest
********************

∂29-Nov-83  1837	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #106    
Received: from SRI-AI by SU-AI with TCP/SMTP; 29 Nov 83  18:36:23 PST
Date: Tue 29 Nov 1983 12:50-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #106
To: AIList@SRI-AI


AIList Digest           Wednesday, 30 Nov 1983    Volume 1 : Issue 106

Today's Topics:
  Conference - Logic Conference Correction,
  Intelligence - Definitions,
  AI - Definitions & Research Methodology & Jargon,
  Seminar - Naive Physics
----------------------------------------------------------------------

Date: Mon 28 Nov 83 22:32:29-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Correction

The ARPANET address in the announcement of the IEEE 1984 Logic Programming
Symposium should be PEREIRA@SRI-AI, not PERIERA@SRI-AI.

Fernando Pereira

[My apologies.  I am the one who inserted Dr. Pereira's name incorrectly.
I was attempting to insert information from another version of the same
announcement that also reached the AIList mailbox.  -- KIL]

------------------------------

Date: 21 Nov 83 6:04:05-PST (Mon)
From: decvax!mcvax!enea!ttds!alf @ Ucb-Vax
Subject: Re: Behavioristic definition of intelligence
Article-I.D.: ttds.137

Doesn't the concept "intelligence" have some characteristics in common with
a concept such as "traffic"?  It seems obvious that one can measure such
entities as "traffic intensity" and the like thereby gaining an indirect
understanding of the conditions that determine the "traffic" but it seems
very difficult to find a direct measure of "traffic" as such.  Some may say
that "traffic" and "traffic intensity" are synonymous concepts but I don't
agree.  The common opinion among psychologists seems to be that
"intelligence" is that which is measured by an intelligence test.  By
measuring a set of problem solving skills and weighing the results together
we get a value.  Why not call it "intelligence"?  The measure could be
applicable to machine intelligence also as soon as (if ever) we teach the
machines to pass intelligence tests.  It should be quite clear that
"intelligence" is not the same as "humanness" which is measured by a Turing
test.

------------------------------

Date: Sat, 26 Nov 83 2:09:14 EST
From: A B Cooper III <abc@brl-bmd>
Subject: Where wise men fear to tread

Being nothing more than an amateur observer on the AI scene,
I hesitate to plunge in like a fool.

Nevertheless, the roundtable on what constitutes intelligence
seemed to cover many interesting hypotheses:

        survivability
        speed of solving problems
        etc

but one.  Being married to a professional educator, I've found
that the common working definition of intelligence is
the ability TO LEARN.

                The more easily one learns new material, the
                        more intelligent one is said to be.

                The more quickly one learns new material,
                        the more intelligent one is said to be.

                One who can learn easily and quickly across a
                        broad spectrum of subjects is said to
                        be more intelligent than one whose
                        abilities are concentrated in one or
                        two areas.

                One who learns only at an average rate, except
                        for one subject area in which he or she
                        excels far above the norms is thought
                        to be TALENTED rather than INTELLIGENT.

                It seems to be believed that the most intelligent
                        folks learn easily and rapidly without
                        regard to the level of material.  They
                        assimilate the difficult with the easy.


Since this discussion was motivated, at least in part, by the
desire to understand what an "intelligent" computer program should
do, I feel that we should re-visit some of our terminology.

In the earlier days of Computer Science, I seem to recall some
excitement about machines (computers) that could LEARN.  Was this
the precursor of AI?  I don't know.

If we build an EXPERT SYSTEM, have we built an intelligent machine
(can it assimilate new knowledge easily and quickly), or have we
produced a "dumb" expert?  Indeed, aren't many of our AI or
knowledge-based or expert systems really something like "dumb"
experts?

                       ------------------------

You might find the following interesting:

        Siegler, Robert S, "How Knowledge Influences Learning,"
AMERICAN SCIENTIST, v71, Nov-Dec 1983.

In this reference, Siegler addresses the questions of how
children  learn and what they know.  He points out that
the main criticism of intelligence tests (that they measure
'knowledge' and not 'aptitude') may miss the mark--that
knowledge and learning may be linked, in humans anyway, in
ways that traditional views have not considered.

                      -------------------------

In any case, should we not be addressing as a primary research
objective, how to make our 'expert systems' into better learners?

Brint Cooper
abc@brl.arpa

------------------------------

Date: 23 Nov 83 11:27:34-PST (Wed)
From: dambrosi @ Ucb-Vax
Subject: Re: Intelligence
Article-I.D.: ucbvax.373

Hume once said that when a discussion or argument seems to be
interminable and without discernable progress, it is worthwhile
to attempt to produce a concrete visualisation of the concept
being argued about. Often, he claimed, this will be IMPOSSIBLE
to do, and this will be evidence that the word being argued
about is a ringer, and the discussion pointless. In more
modern parlance, these concepts are definitionally empty
for most of us.
I submit the following definition as the best presently available:
Intelligence consists of perception of the external environment
(e.g. vision), knowledge representation, problem solving, learning,
interaction with the external environment (e.g. robotics),
and communication with other intelligent agents (e.g. natural
language understanding). (note the conjunctive connector)
If you can't guess where this comes from, check the AAAI-83
proceedings table of contents.
                                bruce d'ambrosio
                                dambrosi%ucbernie@berkeley

------------------------------

Date: Tuesday, 29 Nov 1983 11:43-PST
From: narain@rand-unix
Subject: Re: AI Challenge


AI is advanced programming.

We need to solve complex problems involving reasoning and judgment.  So
we develop appropriate computer techniques (mainly software)
for that.  It is our responsibility to invent techniques that make
efficient intelligent computer programs easier to develop, debug,
extend, and modify.  For this purpose it is only useful to learn
whatever we can from
traditional computer science and apply it to the AI effort.

Tom Dietterich said:

>> Your view of "knowledge representations" as being identical with data
>> structures reveals a fundamental misunderstanding of the knowledge vs.
>> algorithms point.  Most AI programs employ very simple data structures
>> (e.g., record structures, graphs, trees).  Why, I'll bet there's not a
>> single AI program that uses leftist-trees or binomial queues!  But, it
>> is the WAY that these data structures are employed that counts.

We at Rand have ROSS (Rule Oriented Simulation System) that has been employed
very successfully for developing two large scale simulations (one strategic
and one tactical). One implementation of ROSS uses leftist trees for
maintaining event queues. Since these queues are in the innermost loop
of ROSS's operation, it was only sensible to make them as efficient as
possible. We think we are doing AI.
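
Narain's point about the event queue sitting in the innermost loop can be illustrated with a generic discrete-event loop. This is a generic sketch, not ROSS code; Python's heapq stands in here for the leftist trees ROSS uses, since both give logarithmic insertion and removal of the earliest event:

```python
# Generic discrete-event simulation loop: the priority queue of pending
# events is touched on every iteration, so its data structure dominates.
import heapq

def simulate(events, handlers, until):
    """events: list of (time, name); handlers: name -> fn(time) -> new events."""
    heapq.heapify(events)
    log = []
    while events and events[0][0] <= until:
        time, name = heapq.heappop(events)
        log.append((time, name))
        for new in handlers.get(name, lambda t: [])(time):
            heapq.heappush(events, new)
    return log
```

Replacing the heap with an unsorted list would leave the simulation's behavior unchanged but make every step linear in the number of pending events, which is exactly why the choice matters in a large-scale simulation.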

Sanjai Narain
Rand Corp.

------------------------------

Date: Tue, 29 Nov 83 11:31:54 PST
From: Michael Dyer <dyer@UCLA-CS>
Subject: defining AI, AI research methodology, jargon in AI (long msg)

This is in three flaming parts:   (I'll probably never get up the steam to
respond again,  so I'd better get it all out at once.)

Part I.  "Defining intelligence", "defining AI" and/or "responding to AI
challenges" considered harmful:  (enough!)

Recently, I've started avoiding/ignoring AIList since, for the most
part, it's been an endless discussion on "defining AI" (or, most
recently, defending AI).  If I spent my time trying to "define/defend"
AI or intelligence, I'd get nothing done.  Instead, I spend my time
trying to figure out how to get computers to achieve some task -- exhibit
some behavior -- which might be called intelligent or human-like.
If/whenever I'm partially successful, I try to keep track about what's
systematic or insightful.  Both failure points and partial success
points serve as guides for future directions.  I don't spend my time
trying to "define" intelligence by BS-ing about it.  The ENTIRE
enterprise of AI is the attempt to define intelligence.

Here's a positive suggestion for all you AIList-ers out there:

It'd be nice to see more discussion of SPECIFIC programs/cognitive
models:  their Assumptions, their failures, ways to patch them, etc. --
along with contentful/critical/useful suggestions/reactions.

Personally, I find Prolog Digest much more worthwhile.  The discussions
are sometimes low level, but they almost always address specific issues,
with people often offering specific problems, code, algorithms, and
analyses of them.  I'm afraid AIList has been taken over by people who
spend so much time exchanging philosophical discussions that they've
chased away others who are very busy doing research and have a low BS
tolerance level.

Of course, if the BS is reduced, that means that the real AI world will
have to make up the slack.  But a less frequent digest with real content
would be a big improvement.  {This won't make me popular, but perhaps part
of the problem is that most of the contributors seem to be people who
are not actually doing AI, but who are just vaguely interested in it, so
their speculations are ill-informed and indulgent.  There is a use for
this kind of thing, but an AI digest should really be discussing
research issues.  This gets back to the original problem with this
digest -- i.e. that researchers are not using it to address specific
research issues which arise in their work.}

Anyway, here are some examples of task/domain topics that could be
addressed.  Each can be considered to be of the form:  "How could we get
a computer to do X":

          Model Dear Abby.
          Understand/engage in an argument.
          Read an editorial and summarize/answer questions about it.
          Build a daydreamer
          Give legal advice.
          Write a science fiction short story
               ...

{I'm an NLP/Cognitive modeling person -- that's why my list may look
bizarre to some people}

You researchers in robotics/vision/etc.  could discuss, say, how to build
a robot that can:

          climb stairs
             ...
          recognize a moving object
             ...
          etc.

People who participate in this digest are urged to:  (1) select a
task/domain, (2) propose a SPECIFIC example which represents
PROTOTYPICAL problems in that task/domain, (3) explain (if needed) why
that specific example is prototypic of a class of problems, (4) propose
a (most likely partial) solution (with code, if at that stage), and (5)
solicit contentful, critical, useful, helpful reactions.

This is the way Prolog Digest is currently functioning, except at the
programming language level.  AIList could serve a useful purpose if it
were composed of ongoing research discussions about SPECIFIC, EXEMPLARY
problems, along with approaches, their limitations, etc.

If people don't think a particular problem is the right one, then they
could argue about THAT.  Either way, it would elevate the level of
discussion.  Most of my students tell me that they no longer read
AIList.  They're turned off by the constant attempts to "defend or
define AI".

Part II.  Reply to R-Johnson

Some of R-Johnson's criticisms of AI seem to stem from viewing
AI strictly as a TOOLS-oriented science.

{I prefer to refer to STRUCTURE-oriented work (ie content-free) as
TOOLS-oriented work and CONTENT-oriented work as DOMAIN or
PROCESS-oriented.  I'm referring to the distinction that was brought up
by Schank in "The Great Debate" with McCarthy at AAAI-83 Wash DC.}

In general,  tools-oriented work seems more popular and accepted
than content/domain-oriented work.  I think this is because:

     1.  Tools are domain independent, so everyone can talk about them
     without having to know a specific domain -- kind of like bathroom
     humor being more universally communicable than topical-political
     humor.

     2.  Tools have nice properties:  they're general (see #1 above);
     they have weak semantics (e.g. 1st order logic, lambda-calculus)
     so they're clean and relatively easy to understand.

     3.  No one who works on a tool need be worried about being accused
     of "ad hocness".

     4.  Breakthroughs in tools-research happen rarely,  but when one
     does,  the people associated with the breakthrough become
     instantly famous because everyone can use their tool (e.g. Prolog).

In contrast, content or domain-oriented research and theories suffer
from the following ills:

     1.  They're "ad hoc" (i.e.  referring to THIS specific thing or
     other).

     2.  They have very complicated semantics:  poorly understood,
     hard to extend, fragile, etc.

However,  many of the most interesting problems pop up in trying
to solve a specific problem which, if solved,  would yield insight
into intelligence.  Tools, for the most part, are neutral with respect
to content-oriented research questions.  What does Prolog or Lisp
have to say to me about building a "Dear Abby" natural language
understanding and personal advice-giving program?  Not much.
The semantics of Lisp or Prolog say little about the semantics of the
programs which researchers are trying to discover/write in them.
Tools are tools.  You take the best ones you can find off the shelf for
the task at hand.  I love tools and keep an eye out for
tools-developments with as much interest as anyone else.  But I don't
fool myself into thinking that the availability of a tool will solve my
research problems.

{Of course no theory is exclusively one or the other.  Also, there are
LEVELS of tools & content for each theory.  This levels aspect causes
great confusion.}

By and large, AIList discussions (when they get around to something
specific) center too much around TOOLS and not PROCESS MODELS (ie
SPECIFIC programs, predicates, rules, memory organizations, knowledge
constructs, etc.).

What distinguishes AI from compilers, OS, networking, and other aspects
of CS is the TASKS that AI-ers choose.  I want computers that can read
"War and Peace" -- what problems have to be solved, and in what order,
to achieve this goal?  Telling me "use logic" is like telling me
to "use lambda calculus" or "use production rules".

Part III.   Use and abuse of jargon in AI.

Someone recently commented in this digest on the abuse of jargon in AI.
Since I'm from the Yale school, and since Yale commonly gets accused of
this, I'm going to say a few words about jargon.

Different jargon for the same tools is BAD policy.  Different jargon
to distinguish tools from content is GOOD policy.  What if Schank
had talked about "logic"  instead of "Conceptual Dependencies"?
What a mistake that would have been!  Schank was trying to specify
how specific meanings (about human actions) combine during story
comprehension.  The fact that prolog could be used as a tool to
implement Schank's conceptual dependencies is neutral with respect
to what Schank was trying to do.

At IJCAI-83  I heard a paper (exercise for the reader to find it)
which went something like this:

     The work of Dyer (and others) has too many made-up constructs.
     There are affects, object primitives, goals, plans, scripts,
     settings, themes, roles, etc.  All this terminology is confusing
     and unnecessary.

     But if we look at every knowledge construct as a schema (frame,
     whatever term you want here), then we can describe the problem much
     more elegantly.  All we have to consider are the problems of:
     frame activation, frame deactivation, frame instantiation, frame
     updating, etc.

Here, clearly we have a tools/content distinction.  Wherever
possible I actually implemented everything using something like
frames-with-procedural-attachment (ie demons).  I did it so that I
wouldn't have to change my code all the time.  My real interest,
however, was at the CONTENT level.  Is a setting the same as an emotion?
Does the task:  "Recall the last 5 restaurants you were at" evoke the
same search strategies as "Recall the last 5 times you accomplished x",
or "the last 5 times you felt gratitude"?  Clearly, some classes of
frames are connected up to other classes of frames in different ways.
It would be nice if we could discover the relevant classes and it's
helpful to give them names (ie jargon).  For example, it turns out that
many (but not all) emotions can be represented in terms of abstract goal
situations.  Other emotions fall into a completely different class (e.g.
religious awe, admiration).  In my program "love" was NOT treated as
(at the content level) an affect.
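For readers who haven't seen the mechanism, here is a minimal sketch of
frames with procedural attachment ("demons") -- in Python, purely
illustrative, and in no way Dyer's actual implementation; all names here
are hypothetical:

```python
class Frame:
    """A frame: a named bundle of slots, with demons attached to slots."""

    def __init__(self, name):
        self.name = name
        self.slots = {}
        self.demons = {}          # slot name -> list of if-added procedures

    def attach_demon(self, slot, proc):
        """Register a procedure to fire whenever `slot` gets filled."""
        self.demons.setdefault(slot, []).append(proc)

    def fill(self, slot, value):
        """Fill a slot and fire any demons attached to it."""
        self.slots[slot] = value
        for demon in self.demons.get(slot, []):
            demon(self, value)

# Example: filling the "goal-outcome" slot of an emotion frame triggers
# a demon that records an inferred affect.
inferences = []
gratitude = Frame("gratitude")
gratitude.attach_demon("goal-outcome",
                       lambda f, v: inferences.append((f.name, v)))
gratitude.fill("goal-outcome", "achieved-by-other")
```

The point of the tool is exactly what the text says: the frame machinery
stays fixed while the content-level questions (which frames exist, which
demons connect them) can change freely.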

When I was at Yale, at least once a year some tools-oriented person
would come through and give a talk of the form:  "I can
represent/implement your Scripts/Conceptual-Dependency/
Themes/MOPs/what-have-you using my tool X" (where X = ATNs, Horn
clauses, etc.).

I noticed that first-year students usually liked such talks, but the
advanced students found them boring and pointless.  Why?  Because if
you're content-oriented you're trying to answer a different set of
questions, and discussion of the form:  "I can do what you've already
published in the literature using Prolog" simply means "consider Prolog
as a nice tool" but says nothing at the content level, which is usually
where the advanced students are doing their research.

I guess I'm done.  That'll keep me for a year.

                                                  -- Michael Dyer

------------------------------

Date: Mon 28 Nov 83 08:59:57-PST
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: CS Colloq 11/29: John Seely Brown

                [Reprinted from the SU-SCORE bboard.]

Tues, Nov 29, 3:45 MJH refreshments; 4:15 Terman Aud (lecture)

A COMPUTATIONAL FRAMEWORK FOR A QUALITATIVE PHYSICS--
Giving computers "common-sense" knowledge about physical mechanisms

John Seely Brown
Cognitive Sciences
Xerox, Palo Alto Research Center

Humans appear to use a qualitative causal calculus in reasoning about
the behavior  of their physical environment.   Judging from the kinds
of  explanations humans give,  this calculus is  quite different from
the classical physics taught in classrooms.  This raises questions as
to  what this  (naive) physics  is like, how  it helps  one to reason
about the physical world and  how to construct a formal calculus that
captures this kind of  reasoning.  An analysis of this calculus along
with a system, ENVISION, based on it will be covered.

The goals for the qualitative physics are (i) to be far simpler than
classical physics and yet retain all the important distinctions
(e.g., state, oscillation, gain, momentum), (ii) to produce causal
accounts of physical mechanisms, and (iii) to provide a logic for
common-sense, causal reasoning for the next generation of expert
systems.

A new  framework for  examining causal  accounts has  been  suggested
based  on using  collections  of  locally interacting  processors  to
represent physical mechanisms.

------------------------------

End of AIList Digest
********************

∂02-Dec-83  0153	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #107    
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Dec 83  01:53:32 PST
Date: Thu  1 Dec 1983 21:58-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #107
To: AIList@SRI-AI


AIList Digest             Friday, 2 Dec 1983      Volume 1 : Issue 107

Today's Topics:
  Programming Languages - Lisp Productivity,
  Alert - Psychology Today,
  Learning & Expert Systems,
  Intelligence - Feedback Model & Categorization,
  Scientific Method - Psychology,
  Puzzle - The Lady or the Tiger,
  Seminars - Commerce Representation & Learning Linguistic Categories
----------------------------------------------------------------------

Date: 27 Nov 83 16:57:39-PST (Sun)
From: decvax!tektronix!tekcad!franka @ Ucb-Vax
Subject: Re: lisp productivity question - (nf)
Article-I.D.: tekcad.145

        I don't have any documentation, but I heard once from an attendee
at a workshop on design automation that someone had reported a 5:1 productivity
improvement in LISP vs. C, PASCAL, etc. From personal experience I know this
to be true, also. I once wrote a game program in LISP in two days. I later
spent two weeks debugging the same game in a C version (I estimated another
factor of 4 for a FORTRAN version). The nice thing about LISP is not that
the amount of code written is less (although it is, usually by a factor of
2 to 3), but that its environment (even in the scrungy LISPs) makes it much
easier to debug and modify code.

                                        From the truly menacing,
   /- -\                                but usually underestimated,
    <->                                 Frank Adrian
                                        (tektronix!tekcad!franka)

[A caveat: Lisp is very well suited to the nature of game programs.
A fair test would require that data processing and numerical analysis
problems be included in the mix of test problems.  -- KIL]

------------------------------

Date: Mon, 28 Nov 83 11:03 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: Psychology Today

The December issue of Psychology Today (V 17, #12) has some more articles
that may be of interest to AI people. The issue is titled "USER FRIENDLY"
and talks about technological advances that have made machines easier to use.

The articles of interest are:

On Papert, Minsky, and John Anderson            page 26

An Article written by McCarthy                  page 46

An Interview with Alan Kay                      page 50

(why they call him the Grand Old Man is
 beyond me; Alan is only 43)


                                        - steve

------------------------------

Date: Tue 29 Nov 83 18:36:01-EST
From: Albert Boulanger <ABOULANGER@BBNG.ARPA>
Subject: Learning Expert systems

Re: Brint Cooper's remark on non-learning expert systems being "dumb":

Yes, some people would agree with you. In fact, Dr. R.S. Michalski's group
at the U of Illinois is building an Expert System, ADVISE, that incorporates
learning capabilities.

Albert Boulanger
ABOULANGER@BBNG

------------------------------

Date: Wed, 30 Nov 83 09:07 PST
From: NNicoll.ES@PARC-MAXC.ARPA
Subject: "Intelligence"

I see Intelligence as the sophistication of the deep structure
mechanisms that generate both thought and behavior.  These structures
(per Albus), work as cross-coupled hierarchies of phase-locked loops,
generating feedback hypotheses about the stimulus at each level of the
hierarchy.  These feedback hypotheses are better at predicting and
matching the stimulus if the structure holds previous patterns that are
similar to the present stimulus.  Therefore, intelligence is a function
both of the amount of knowledge that can be brought to bear on pattern
matching a present problem (inference), and of the number of levels in
the hierarchy the organism (be it mechanical or organic) can bring to
bear on breaking the stimulus/pattern down into its component parts and
generating feedback hypotheses to adjust the organism's response at
each level.

I feel any structure sufficiently complex to exhibit intelligence, be it
a bird-brained idiot whose height of reasoning is "find fish - eat
fish", or "Deep Thought" who can break down the structures and reason
about a whole world, should be considered intelligent, but with
different "amounts" of intelligence, and possibly about different
experiences.  I do not think there is any "threshold" above which an
organism can be considered intelligent and below which they are not.
This level would be too arbitrary a structure for anything except very
delimited areas.

So, let's get on with the pragmatic aspects of this work, creating better
slaves to do our scut work for us, our reasoning about single-mode
structures too complex for a human brain to assimilate, our tasks in
environments too dangerous for organic creatures, and our tasks too
repetitious for the safety of the human brain/body structure, and move
to a lower priority the re-creation of pseudo-human "intelligence".  I
think that would require a pseudo-human brain structure (combining both
"Emotion" and "Will") that would be interesting only in research on
humanity (create a test-bed wherein experiments that are morally
unacceptable when performed on organic humans could be entertained).

Nick Nicoll

------------------------------

Date: 29 Nov 83 20:47:33-PST (Tue)
From: decvax!ittvax!dcdwest!sdcsvax!sdcsla!west @ Ucb-Vax
Subject: Re: Intelligence and Categorization
Article-I.D.: sdcsla.461

        From:  AXLER.Upenn-1100@Rand-Relay
          (David M. Axler - MSCF Applications Mgr.)

               I think Tom Portegys' comment in 1:98 is very true.
          Knowing whether or not a thing is intelligent, has a soul,
          etc., is quite helpful in letting us categorize it.  And,
          without that categorization, we're unable to know how to
          understand it.  Two minor asides that might be relevant in
          this regard:

               1) There's a school of thought in the fields of
          linguistics, folklore, and anthropology, which is
          based on the notion (admittedly arguable) that the only way
          to truly understand a culture is to first record and
          understand its native categories, as these structure both
          its language and its thought, at many levels.  (This ties in
          to the Sapir-Whorf hypothesis that language structures
          culture, not the reverse...)  From what I've read in this
          area, there is definite validity in this approach.  So, if
          it's reasonable to try and understand a culture in terms of
          its categories (which may or may not be translatable into
          our own culture's categories, of course), then it's equally
          reasonable for us to need to categorize new things so that
          we can understand them within our existing framework.

Deciding whether a thing is or is not intelligent seems to be a hairier
problem than "simply" categorizing its behavior and other attributes.

As to point #1, trying to understand a culture by looking at how it
categorizes does not constitute a validation of the process of
categorization (particularly in scientific endeavours).   Restated: There
is no connection between the fact that anthropologists find that studying
a culture's categories is a very powerful tool for aiding understanding,
and the conclusion that we need to categorize new things to understand them.

I'm not saying that categorization is useless (far from it), but Sapir-Whorf's
work has no direct bearing on this subject (in my view).

What I am saying is that while deciding to treat something as "intelligent",
e.g., a computer chess program, may prove to be the most effective way of
dealing with it in "normal life", it doesn't do a thing for understanding
the thing.   If you choose to classify the chess program as intelligent,
what has that told you about the chess program?   If you classify it
as unintelligent...?   I think this reflects more upon the interaction
between you and the chess program than upon the structure of the chess
program.

                        -- Larry West   UC San Diego
                        -- ARPA:        west@NPRDC
                        -- UUCP:        ucbvax!sdcsvax!sdcsla!west
                        --      or      ucbvax:sdcsvax:sdcsla:west

------------------------------

Date: 28 Nov 83 18:53:46-PST (Mon)
From: harpo!eagle!mhuxl!ulysses!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: Rational Psych & Scientific Method
Article-I.D.: ncsu.2416

Well, I hope this is the last time ....

Again, I have been accused of ignorance; again the accusation is false.
It's fortunate that only my words can make it into this medium.  I would
appreciate the termination of this discussion, but will not stand by
and be patronized without responding.  All sane and rational people,
hit the <del> and go on to the next news item please.

When I say psychologists do not do very good science I am talking about
the exact same thing you are talking about.  There is no escape. Those
"rigorous" experiments sometimes succeed in establishing some "facts",
but they are sufficiently encumbered by lack of controls that one often
does not know what to make of them.  This is not to imply a criticism of
psychologists as intellectually inferior to chemists, but the field is
just not there yet.  Is Linguistics a science?  Is teaching a science?
Laws (and usually morals) prevent the experiments we need, to do REAL
controlled experiments; lack of understanding would probably prevent
immediate progress even in the absence of those laws.  It's a bit like
trying to make a "scientific" study of a silicon wafer with 1850's tools
and understanding of electronics.  A variety of interesting facts could
be established, but it is not clear that they would be very useful.  Tack
on some I/O systems and you could then perhaps allow the collection of
reams of timing and capability data and could try to correlate the results
and try to build theories -- that LOOKS like science.  But is it? In
my book, to be a science, there must be a process of convergence in which
the theories move ever closer to explaining reality, and the experiments
become ever more precise.  I don't see much convergence in experimental
psychology. I see more of a cyclic nature to the theories ....
----GaryFostel----
		  P.S. There are a few other sciences which do not deserve
		       the title, so don't feel singled out. Computer
		       Science for example.

------------------------------

Date: Tue, 29 Nov 83 11:15 EST
From: Chris Moss <Moss.UPenn@Rand-Relay>
Subject: The Lady or the Tiger

                 [Reprinted from the Prolog Digest.]

Since it's getting near Christmas, here are a few puzzlers to
solve in Prolog. They're taken from Raymond Smullyan's delightful
little book of the above name. Sexist allusions must be forgiven.

There once was a king, who decided to try his prisoners by giving
them a logic puzzle. If they solved it they would get off, and
get a bride to boot; otherwise ...

The first day there were three trials. In all three, the king
explained, the prisoner had to open one of two rooms. Each room
contained either a lady or a tiger, but it could be that there
were tigers or ladies in both rooms.

On each room he hung a sign as follows:

                I                                    II
    In this room there is a lady        In one of these rooms there is
       and in the other room              a lady and in one of these
         there is a tiger                   rooms there is a tiger

"Is it true, what the signs say?" asked the prisoner.
"One of them is true," replied the king, "but the other one is false."

If you were the prisoner, which would you choose (assuming, of course,
that you preferred the lady to the tiger)?

                      -------------------------

For the second and third trials, the king explained that either
both statements are true, or both are false. What is the
situation?

Signs for Trial 2:

                  I                                     II
       At least one of these rooms              A tiger is in the
            contains a tiger                        other room


Signs for Trial 3:

                  I                                     II
      Either a tiger is in this room             A lady is in the
      or a lady is in the other room                other room


Representing the problems is much more difficult than finding the
solutions.  The latter two test a sometimes ignored aspect of the
[Prolog] language.

Have fun!
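The digest suggests Prolog, but to keep this note self-contained here is
a brute-force sketch in Python instead, under one possible reading of the
signs (the encoding is mine, not Smullyan's): enumerate the four possible
room assignments and keep those consistent with the king's rule.

```python
from itertools import product

LADY, TIGER = "lady", "tiger"

def models(constraint):
    """All (room I, room II) assignments satisfying the king's rule."""
    return [(r1, r2) for r1, r2 in product([LADY, TIGER], repeat=2)
            if constraint(r1, r2)]

def trial1(r1, r2):
    # Sign I: this room holds the lady and the other the tiger.
    # Sign II: one room holds a lady and the other a tiger.
    # King: exactly one sign is true.
    s1 = (r1 == LADY and r2 == TIGER)
    s2 = (r1 != r2)
    return s1 != s2

def trial2(r1, r2):
    # Sign I: at least one room contains a tiger.
    # Sign II: a tiger is in the other room (room I).
    # King: both signs true, or both false.
    s1 = (r1 == TIGER or r2 == TIGER)
    s2 = (r1 == TIGER)
    return s1 == s2

def trial3(r1, r2):
    # Sign I: either a tiger is in this room or a lady is in the other.
    # Sign II: a lady is in the other room (room II).
    s1 = (r1 == TIGER or r2 == LADY)
    s2 = (r2 == LADY)
    return s1 == s2

if __name__ == "__main__":
    for name, c in [("Trial 1", trial1), ("Trial 2", trial2),
                    ("Trial 3", trial3)]:
        print(name, models(c))
```

Under this encoding Trial 1 leaves a single consistent world, tiger in
room I and lady in room II, while Trials 2 and 3 each admit several, which
is where the puzzle (and the Prolog exercise) gets interesting.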

------------------------------

Date: 27 Nov 1983 20:42:46-EST
From: Mark.Fox at CMU-RI-ISL1
Subject: AI talk

                 [Reprinted from the CMU-AI bboard.]

TITLE:          Databases and the Logic of Business
SPEAKER:        Ronald M. Lee, IIASA Austria & LNEC Portugal
DATE:           Monday, Nov. 28, 1983
PLACE:          MS Auditorium, GSIA

ABSTRACT: Business firms differentiate themselves with special products,
services, etc.  Nevertheless, commercial activity requires certain
standardized concepts, e.g., a common temporal framework, currency of
exchange, concepts of ownership and contractual obligation.  A logical data
model, called CANDID, is proposed for modelling these standardized aspects
in axiomatic form.  The practical value is the transportability of this
knowledge across a wide variety of applications.

------------------------------

Date: 30 Nov 83 18:58:27 PST (Wednesday)
From: Kluger.PA@PARC-MAXC.ARPA
Reply-to: Kluger.PA@PARC-MAXC.ARPA
Subject: HP Computer Colloquium 12/1/83 

                Professor Roman Lopez de Montaras
                Politecnico Universidade Barcelona

      A Learning System for Linguistic Categorization of Soft
                             Observations

We describe a human-guided feature classification system. A person
teaches the denotation of subjective linguistic feature descriptors to
the system by reference to examples.  The resulting knowledge base of
the system is used in the classification phase for interpretation of
descriptions.

Interpersonal descriptions are communicated via semantic translations of
subjective descriptions.  The advantage of a subjective linguistic
description over more traditional arithmomorphic schemes is its high
descriptor-feature consistency.  This is due to the relative simplicity
of the underlying cognitive process.  The result is a high feature
resolution for the overall cognitive perception and description
processes.

At present the system is still being used for categorization of "soft"
observations in psychological research, but applications in any
person-machine system are conceivable.

------------------------------

End of AIList Digest
********************

∂02-Dec-83  2044	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #108    
Received: from SRI-AI by SU-AI with TCP/SMTP; 2 Dec 83  20:44:19 PST
Date: Fri  2 Dec 1983 16:15-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #108
To: AIList@SRI-AI


AIList Digest            Saturday, 3 Dec 1983     Volume 1 : Issue 108

Today's Topics:
  Editorial Policy,
  AI Jargon,
  AI - Challenge Responses,
  Expert Systems & Knowledge Representation & Learning
----------------------------------------------------------------------

Date: Fri 2 Dec 83 16:08:01-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: Editorial Policy

It has been suggested that the volume on this list is too high and the
technical content is too low.  Two people have recently written to me
suggesting that the digest be converted to a magazine format with
perhaps a dozen edited departments that would constitute alternating
special issues.

I appreciate their offers to serve as editors, but have no desire to
change the AIList format.  The volume has been high, but that is
typical of new lists.  I encourage technical contributions, but I do
not wish to discourage general-interest discussions.  AIList provides
a forum for material not appropriate to journals and conferences --
"dumb" questions, requests for information, abstracts of work in
progress, opinions and half-baked ideas, etc.  I do not find these a
waste of time, and attempts to screen any class of "uninteresting"
messages will only deprive those who are interested in them.  A major
strength of AIList is that it helps us develop a common vocabulary for
those topics that have not yet reached the textbook stage.

If people would like to split off their own sublists, I will be glad
to help.  That might reduce the number of uninteresting messages
each reader is exposed to, although the total volume of material would
probably be higher.  Narrow lists do tend to die out as their boom and
bust cycles gradually lengthen, but AIList could serve as the channel
by which members could regroup and recruit new members.  The chief
disadvantage of separate lists is that we would lose valuable
cross-fertilization between disciplines.

For the present, I simply ask that members be considerate when
composing messages.  Be concise, preferably stating your main points
in list form for easy reference.  Remember that electronic messages
tend to seem pugnacious, so that even slight sarcasm may arouse
numerous rebuttals and criticisms.  It is unnecessary to marshall
massive support for every claim since you will have the opportunity to
reply to critics.  Also, please keep in mind that AIList (under my
moderatorship) is primarily concerned with AI and pattern recognition,
not psychology, metaphysics, philosophy of science, or any other topic
that has its own major following.  We welcome any material that
advances the progress of intelligent machines, but the hard-core
discussions from other disciplines should be directed elsewhere.

                                        -- Ken Laws

------------------------------

Date: Tue 29 Nov 83 21:09:12-PST
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: Re: Dyer's flame

    In the life of this list a number of issues, among them intelligence,
parallelism and AI, defense of AI, rational psychology, and others, have
been maligned as "pointless" or whatever. Without getting involved in a
debate on "philosophy" vs. "real research", a quick scan of these topics
shows them to be far from pointless. I regret that Dyer's students have
stopped reading this list; perhaps they should follow his advice of submitting
the right type of article to this list.

    As a side note, I am VERY interested in having people outside of mainstream
AI participate in this list; while one sometimes wades through muddled articles
of little value, this is more than repaid by the fresh viewpoints and
occasional gem that would otherwise never have been found.

    Ken Laws has done an excellent job grouping the articles by interest and
topic; uninterested readers can then skip an entire volume if its
theme is of no interest. A greater number of articles submitted can only
improve this process; the burden is on those unsatisfied with the content of
this board to submit them. I would welcome submissions of the kind suggested
by Dr. Dyer, and hope that others will follow his advice and try to lead the
board to whatever avenue they think is the most interesting. There's room
here for all of us...

David Rogers
DRogers@SUMEX-AIM.ARPA

------------------------------

Date: Tue 29 Nov 83 22:24:14-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Tools

I agree with Michael Dyer's comments on the lack of substantive
material in this list and on the importance of dealing with
new "real" tasks rather than using old solutions of old problems
to show off one's latest tool. However, I feel like adding two
comments:

1. Some people (me included) have a limited supply of "writing energy"
to write serious technical stuff: papers, proposals and the like.
Raving about generalities, however, consumes much less of that energy
per line than the serious stuff. The people who are busily writing
substantive papers have no energy left to summarize them on the net.

2. Very special tools, in particularly fortunate situations
("epiphanies"?!), can bring a new and better level of understanding of a
problem, just by virtue of what can be said with the new tool, and
how. Going the other direction, we all know that we need to change our
tools to suit our problems. The paradigmatic relation between subject
and tool is for me the one between classical physics and mathematical
analysis, where tool and subject are intimately connected but yet
distinct. Nothing of the kind has yet happened in AI (which shouldn't
surprise us, seeing how long it took to develop that other
relationship...).

Note: Knowing of my involvement with Prolog/logic programming, some
reader of this might be tempted to think "Ahah! what he is really
driving at is that logic/Horn clauses/Prolog [choose one] is that kind
of tool for AI. Let me nip that presumption in the bud; these tool
addicts are dangerous!" Gentle reader, save your flame! Only time will
show whether anything of the kind is the case, and my private view on
the subject is sufficiently complicated (confused?) that if I could
disentangle it and write about it clearly I would have a paper rather
than a net message...

Fernando Pereira

------------------------------

Date: Wed 30 Nov 83 11:58:56-PST
From: Wilkins  <WILKINS@SRI-AI.ARPA>
Subject: jargon

I understand Dyer's comments on what he calls the tool/content distinction.
But it seems to me that the content distinctions he rightly thinks are
important can often be expressed in terms of tools, and that it would be
clearer to do so.  He talked about handling one's last trip to the restaurant
differently from the last time one is in love.  I agree that this is an
important distinction to make.  I would like to see the difference expressed
in "tools", e.g., "when handling a restaurant trip (or some similar class of
events) our system does a chronological search down its list of events, but
when looking for love, it does a best first search on its list of personal
relationships."  This is clearer and communicates more than saying the system
has a "love-MOP" and a "restaurant-script".  This is only a made up example
-- I am not saying Mr. Dyer used the above words or that he does not explain
things well.  I am just trying to construct a non-personal example of the
kind of thing to which I object, but that occurs often in the literature.
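To make the made-up contrast above concrete, here is an equally made-up
sketch in Python (all names, data, and salience numbers are hypothetical,
invented only to illustrate the two retrieval strategies): restaurant
trips retrieved by a chronological scan backwards from the most recent
event, versus personal relationships retrieved best-first by salience.

```python
import heapq

# Hypothetical episodic memory: a flat list of time-stamped events.
events = [
    {"type": "restaurant", "when": 1, "place": "diner"},
    {"type": "restaurant", "when": 5, "place": "cafe"},
    {"type": "restaurant", "when": 3, "place": "bistro"},
]

# Hypothetical relationship memory: entries weighted by salience.
relationships = [
    {"who": "A", "salience": 0.2},
    {"who": "B", "salience": 0.9},
    {"who": "C", "salience": 0.5},
]

def last_restaurant_trips(events, n):
    """Chronological search: walk events from most recent backwards."""
    ordered = sorted(events, key=lambda e: e["when"], reverse=True)
    return [e["place"] for e in ordered if e["type"] == "restaurant"][:n]

def best_first_relationships(rels, n):
    """Best-first search: expand the most salient entry first."""
    heap = [(-r["salience"], r["who"]) for r in rels]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(min(n, len(heap)))]
```

Saying which of these two procedures a system runs over which class of
memory communicates more, as Wilkins argues, than naming the classes
"restaurant-script" and "love-MOP".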

------------------------------

Date: Wed, 30 Nov 83 13:47 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: McCarthy and 'mental' states

In the December Psychology Today John McCarthy has a short article that
raises a fairly contentious point.

In his article he talks about how it is not necessarily a bad thing that
people attribute "human" or what he calls 'mental' attributes to complex
systems. Thus when someone anthropomorphises the actions of his/her
car, boat, or terminal, one is engaging in a legitimate form of description
of a complex process.

Indeed he argues further that while currently most computer programs
can still be understood by their underlying mechanistic properties,
eventually complex expert systems will only be capable of being described
by attributing 'mental' states to them.

                                 ----

I think this is the proliferation of jargon and verbiage that
Ralph Johnson noted is associated with
a large segment of AI work. What has happened is not a discovery or
emulation of cognitive processes, but a break-down of certain weak
programmers' abilities to describe the mechanical characteristics of
their programs. They then resort to arcane languages and to attributing
'mental' characteristics to what are basically fuzzy algorithms that
have been applied to poorly formalized or poorly characterized problems.
Once the problems are better understood and are given a more precise
formal characterization, one no longer needs "AI" techniques.

                                        - Steven Gutfreund

------------------------------

Date: 28 Nov 83 23:04:58-PST (Mon)
From: pur-ee!uiucdcs!uicsl!Anonymous @ Ucb-Vax
Subject: Re: Clarifying my 'AI Challange' - (nf)
Article-I.D.: uiucdcs.4190

re: The Great Promises of AI

Beware the promises of used car salesmen.  The press has stories to
sell, and so do the more extravagant people within AI.  Remember that
many of these people had to work hard to convince grantmakers that AI
was worth their money, back in the days before practical applications
of expert systems began to pay off.

It is important to distinguish the promises of AI from the great
fantasies that have been speculated by the media (and some AI
researchers) in a fit of science fiction.  AI applications will
certainly be diverse and widespread (thanks no less to the VLSI
people).  However, I hope that none of us really believes that machines
will possess human general intelligence any time soon.  We banter about
such stuff hoping that when ideas fly, at least some of them will be
good ones.  The reality is that nobody sees a clear and brightly lit
path from here to super-intelligent robots.  Rather we see hundreds of
problems to be solved.  Each solution should bring our knowledge and
the capabilities of our programs incrementally forward.  But let's not
kid ourselves about the complexity of the problems.  As it has already
been pointed out, AI is tackling the hard problems -- the ones for
which nobody knows any algorithms.

------------------------------

Date: Wed, 30 Nov 83 10:29 PST
From: Tong.PA@PARC-MAXC.ARPA
Subject: Re: AI Challenge

  Tom Dietterich:
  Your view of "knowledge representations" as being identical with data
  structures reveals a fundamental misunderstanding of the knowledge vs.
  algorithms point. . .Why, I'll bet there's not a single AI program that
  uses leftist-trees or binomial queues!

  Sanjai Narain:
  We at Rand have ROSS. . .One implementation of ROSS uses leftist trees for
  maintaining event queues. Since these queues are in the innermost loop
  of ROSS's operation, it was only sensible to make them as efficient as
  possible. We think we are doing AI.

Sanjai, you take the letter but not the spirit of Tom's reflection. I
don't think any AI researcher would object to improving the efficiency
of her program, or using traditional computer science knowledge to help.
But - look at your own description of ROSS development! Clearly you
first conceptualized ROSS ("queues are the innermost loop") and THEN
worried about efficiency in implementing your conceptualization ("it was
only sensible to make them as efficient as possible"). Traditional
computer science can shed much light on implementation issues, but has
in practice been of little direct help in the conceptualization phase
(except occasionally by analogy and generalization). All branches of
computer science share basic interests such as how to represent and use
knowledge, but AI differs in the GRAIN SIZE of the knowledge it
considers.  It would be very desirable to have a unified theory of
computer science that provides ideas and tools along the continuum of
knowledge grain size; but we are not quite there, yet. Until that time,
perceiving the different branches of computer science as contributing
useful knowledge to different levels of implementation (e.g. knowledge
level, data level, register transfer level, hardware level) is probably
the best integration our short term memories can handle.

Chris Tong

------------------------------

Date: 28 Nov 83 22:25:35-PST (Mon)
From: pur-ee!uiucdcs!marcel @ Ucb-Vax
Subject: RJ vs AI: Science vs Engineering? - (nf)
Article-I.D.: uiucdcs.4187

In response to Johnson vs AI, and Tom Dietterich's defense:

The emergence of the knowledge-based perspective is only the beginning of
what AI has achieved and is working on. Obvious corollaries: knowledge
acquisition and extraction, representation, inference engines.

Some rather impressive results have been obtained here. One with which I
am most familiar is work being done at Edinburgh by the Machine Intelligence
Research Unit on knowledge extraction via induction from user-supplied
examples (the induction program is commercially available). A paper by
Shapiro (Alen) & Niblett in Computer Chess 3 describes the beginnings of the
work at MIRU. Shapiro has only this month finished his PhD, which effectively
demonstrates that human experts, with the aid of such induction programs,
can produce knowledge bases that surpass the capabilities of any expert
as regards their completeness and consistency. Shapiro synthesized a
totally correct knowledge base for part of the King-and-Pawn against
King-and-Rook chess endgame, and even that relatively small endgame
was so complex that, though it was treated in the chess literature, the
descriptions provided by human experts consisted largely of gaps. Impressively,
3 chess novices managed (again with the induction program) to achieve 99%
correctness in this normally difficult problem.
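The induction scheme described can be sketched very roughly in code. The following is a minimal ID3-style illustration of inducing a readable decision tree from classified examples; it is NOT the MIRU program, and the attribute and class names are invented:

```python
# A rough ID3-style sketch of induction from examples -- not the MIRU
# program, just an illustration of the general technique.
from collections import Counter
import math

def entropy(examples):
    """Shannon entropy of the class labels in a list of (attrs, label)."""
    counts = Counter(label for _, label in examples)
    return -sum(c / len(examples) * math.log2(c / len(examples))
                for c in counts.values())

def induce(examples, attributes):
    """Grow a tree: either a class label, or (attribute, {value: subtree})."""
    labels = {label for _, label in examples}
    if len(labels) == 1:
        return labels.pop()
    if not attributes:                     # contradictory data: majority vote
        return Counter(label for _, label in examples).most_common(1)[0][0]

    def split_entropy(attr):               # expected entropy after splitting
        parts = {}
        for attrs, label in examples:
            parts.setdefault(attrs[attr], []).append((attrs, label))
        return sum(len(p) / len(examples) * entropy(p) for p in parts.values())

    best = min(attributes, key=split_entropy)
    branches = {}
    for attrs, label in examples:
        branches.setdefault(attrs[best], []).append((attrs, label))
    rest = [a for a in attributes if a != best]
    return (best, {v: induce(part, rest) for v, part in branches.items()})

def classify(tree, attrs):
    while isinstance(tree, tuple):
        attr, branches = tree
        tree = branches[attrs[attr]]
    return tree
```

The induced tree is itself readable by a human, which is exactly the intelligibility property the message emphasizes.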

        The issue: even novices are better at articulating knowledge
        by means of examples than experts are at articulating the actual
        rules involved, *provided* that the induction program can represent
        its induced rules in a form intelligible to humans.

The long-term goal and motivation for this work is the humanization of
technology, namely the construction of systems that not only possess expert
competence, but are capable of communicating their reasoning to humans.
And we had better get this right, lest we get stuck with machines that run our
nuclear plants in ways that are perhaps super-smart but incomprehensible ...
until a crisis happens, when suddenly the humans need to understand what the
machine has been doing until now.

The problem: lack of understanding of human cognitive psychology. More
specifically, how are human concepts (even for these relatively easy
classification tasks) organized? What are the boundaries of 'intelligibility'?
Though we are able to build systems that function, in some ways, like a human
expert, we do not know much about what distinguishes brain-computable processes
from general algorithms.

But we are learning. In fact, I am tempted to define this as one criterion
distinguishing knowledge-based AI from other computing: the absolute necessity
of having our programs explain their own processing. This is close to demanding
that they also process in brain-compatible terms. In any case we will need to
know what the limits of our brain-machine are, and in what forms knowledge
is most easily apprehensible to it. This brings our end of AI very close to
cognitive psychology, and threatens to turn knowledge representation into a
hard science -- not just

        What does a system need, to be able to X?

but     How does a human brain produce behavior/inference X, and how do
        we implement that so as to preserve maximal man-machine compatibility?

Hence the significance of the work by Shapiro, mentioned above: the
intelligibility of his representations is crucial to the success of his
knowledge-acquisition method, and the whole approach provides some clues on
how a humane knowledge representation might be scientifically determined.

A computer is merely a necessary weapon in this research. If AI has made little
obvious progress it may be because we are too busy trying to produce useful
systems before we know how they should work. In my opinion there is too little
hard science in AI, but that's understandable given its roots in an engineering
discipline (the applications of computers). Artificial intelligence is perhaps
the only "application" of computers in which hard science (discovering how to
describe the world) is possible.

We might do a favor both to ourselves and to psychology if knowledge-based AI
adopted this idea. Of course, that would cut down drastically on the number of
papers published, because we would have some very hard criteria about what
comprised a tangible contribution. Even working programs would not be
inherently interesting, no matter what they achieved or how they achieved it,
unless they contributed to our understanding of knowledge, its organization
and its interpretation. Conversely, working programs would be necessary only
to demonstrate the adequacy of the idea being argued, and it would be possible
to make very solid contributions without a program (as opposed to the flood of
"we are about to write this program" papers in AI).

So what are we: science or engineering? If both, let's at least recognize the
distinction as being valuable, and let's know what yet another expert system
proves beyond its mere existence.

                                        Marcel Schoppers
                                        U of Illinois @ Urbana-Champaign

------------------------------

End of AIList Digest
********************

∂05-Dec-83  0250	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #109    
Received: from SRI-AI by SU-AI with TCP/SMTP; 5 Dec 83  02:49:28 PST
Date: Sun  4 Dec 1983 22:56-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #109
To: AIList@SRI-AI


AIList Digest             Monday, 5 Dec 1983      Volume 1 : Issue 109

Today's Topics:
  Expert Systems & VLSI - Request for Material,
  Programming Languages - Productivity,
  Editorial Policy - Anonymous Messages,
  Bindings - Dr. William A. Woods,
  Intelligence,
  Looping Problem,
  Pattern Recognition - Block Modeling,
  Seminars - Programs as Predicates & Explainable Expert System
----------------------------------------------------------------------

Date: Sun, 4 Dec 83 17:59:53 PST
From: Tulin Mangir <tulin@UCLA-CS>
Subject: Request for Material

      I am preparing a tutorial and a current bibliography, for IEEE,
of the work in the area of expert system applications to CAD and computer aided
testing as well as computer aided processing. Specific emphasis is
on LSI/VLSI design, testing and processing. I would like this
material to be as complete and as current as we can all make it.  So, if you
have any material in these areas that you would like me to include
in the notes, ideas about representation of structure, knowledge,
behaviour of digital circuits, etc., references you know of,
please send me a msg. Thanks.

Tulin Mangir <cs.tulin@UCLA-cs>
(213) 825-2692
      825-4943 (secretary)

------------------------------

Date: 29 Nov 83 22:25:19-PST (Tue)
From: sri-unix!decvax!duke!mcnc!marcel@uiucdcs.UUCP (marcel )@CCA
Subject: Re: lisp productivity question - (nf)
Article-I.D.: uiucdcs.4197

And now a plug from the logic programming people: try prolog for easy
debugging. Though it may take a while to get used to its modus operandi,
it has one advantage that is shared by no other language I know of:
rule-based computing with a clean formalism. Not to mention the ease
of implementing concepts such as "for all X satisfying P(X) do ...".
The end of cumbersome array traversals and difficult boolean conditions!
Well, almost. Not to mention free pattern matching. And I wager that
the programs will be even shorter in Prolog, primarily because of these
considerations. I have written 100-line Prolog programs which were
previously coded as Pascal programs of 2000 lines.
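For readers without a Prolog system at hand, the rule-based style being praised can be sketched, very loosely, as a toy backward-chaining engine. This is not Prolog: it omits rule-variable renaming (so rule and query variables must not clash), the occurs check, and cut, and all the predicate names are invented:

```python
# A toy backward-chaining engine -- a loose sketch of the Prolog style,
# NOT Prolog itself.  Variables are strings starting with '?'.
def unify(a, b, env):
    """Match terms under env; return an extended env or None."""
    if isinstance(a, str) and a.startswith('?'):
        if a in env:
            return unify(env[a], b, env)
        new = dict(env); new[a] = b
        return new
    if isinstance(b, str) and b.startswith('?'):
        return unify(b, a, env)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            env = unify(x, y, env)
            if env is None:
                return None
        return env
    return env if a == b else None

def solve(goal, facts, rules, env):
    """Yield every binding environment that proves goal."""
    for fact in facts:
        e = unify(goal, fact, env)
        if e is not None:
            yield e
    for head, body in rules:
        e = unify(goal, head, env)
        if e is not None:
            yield from solve_all(body, facts, rules, e)

def solve_all(goals, facts, rules, env):
    if not goals:
        yield env
    else:
        for e in solve(goals[0], facts, rules, env):
            yield from solve_all(goals[1:], facts, rules, e)

def resolve(term, env):
    """Chase variable bindings to a ground value."""
    while isinstance(term, str) and term.startswith('?') and term in env:
        term = env[term]
    return term

FACTS = [("parent", "ann", "bob"), ("parent", "bob", "cid")]
RULES = [(("grandparent", "?x", "?z"),
          [("parent", "?x", "?y"), ("parent", "?y", "?z")])]
```

Note that the generator `solve` is also the "for all X satisfying P(X) do ..." construct: iterating over it enumerates every solution.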

Sorry, I just couldn't resist the chance to be obnoxious.

------------------------------

Date: Fri, 2 Dec 83 09:47 EST
From: MJackson.Wbst@PARC-MAXC.ARPA
Subject: Lisp "productivity"

"A caveat: Lisp is very well suited to the nature of game programs.
A fair test would require that data processing and numerical analysis
problems be included in the mix of test problems."

A fair test of what?  A fair test of which language yields the greatest
productivity when applied to the particular mix of test problems, I
would think.  Clearly (deepfelt theological convictions to the contrary)
there is NO MOST-PRODUCTIVE LANGUAGE.  It depends on the problem set; I
like structured languages so I do my scientific programming in Ratfor,
and when I had to do it in Pascal it was awful, but for a different type
of problem Pascal would be just fine.

Mark

------------------------------

Date: 30 Nov 83 22:49:51-PST (Wed)
From: pur-ee!uiucdcs!uicsl!Anonymous @ Ucb-Vax
Subject: Lisp Productivity & Anonymous Messages

  Article-I.D.: uiucdcs.4245

  The most incredible programming environment I have worked with to date is
  that of InterLisp.  The graphics-based trace and break packages on Xerox's
  InterLisp-D (not to mention the Lisp editor, file package, and the
  programmer's assistant) are, to say the least, addictive.  Ease of debugging
  has been combined with power to yield an environment in which program
  development/debugging is easy, fast and productive.  I think other languages
  have a long way to go before someone develops comparable environments for
  them.  Of course, part of this is due to the language (i.e., Lisp) itself,
  since programs written in Lisp tend to be easy to conceptualize and write,
  short, and readable.

[I will pass this message along to the Arpanet AIList readers,
but am bothered by its anonymous authorship.  This is hardly an
incriminating message, and I see no reason for the author to hide.
I do not currently reject anonymous messages out of hand, but I
will certainly screen them strictly.  -- KIL]

------------------------------

Date: Thu 1 Dec 83 07:37:04-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Press Release RE: Dr. William A. Woods

                [Reprinted from the SU-SCORE bboard.]

As of September 16, Dr. Woods is Chief Scientist directing all research in AI and related
technologies for Applied Expert Systems, Inc., Five Cambridge Center,
Cambridge, Mass 02142  (617)492-7322  net address Woods@BBND (same as before)
HL

------------------------------

Date: Fri, 2 Dec 83 09:57:14 PST
From: Adolfo Di-Mare <v.dimare@UCLA-LOCUS>
Subject: a new definition of intelligence

Your intelligence is directly proportional to the time it takes
you to bounce back after you're replaced by an <intelligent> computer.

As I'm not an economist, I won't argue about how intelligent we are...
Put another way, is an expert who builds a machine that replaces
him/her intelligent? If s/he is not, is the machine?

        Adolfo
              ///

------------------------------

Date: 1 Dec 83 20:37:31-PST (Thu)
From: decvax!bbncca!jsol @ Ucb-Vax
Subject: Re: Halting Problem Discussion
Article-I.D.: bbncca.365

Can a method be formulated for deciding whether or not you are on the right
track? Yes. It's called interaction. Ask someone you feel you can trust
whether or not you are getting anywhere, and to offer any advice to help you
get where you want to go.

Students do it all the time: they come to their teachers and ask them for
help. Looping programs could decide that they have looped for as long as
they care to and ask for a reality check. An algorithm to do this is available
if anyone wants it (read that to mean I will produce one).
--
[--JSol--]

JSol@Usc-Eclc/JSol@Bbncca (Arpa)
JSol@Usc-Eclb/JSol@Bnl (Milnet)
{decvax, wjh12, linus}!bbncca!jsol

------------------------------

From: Bibbero.PMSDMKT
Reply-to: Bibbero.PMSDMKT
Subject: Big Brother and Block Modeling, Warning

               [Reprinted from the Human-Nets Digest.]

  [This application of pattern recognition seems to warrant mention,
  but comments on the desirability of such analysis should be directed
  to Human-Nets@RUTGERS. -- KIL]

The New York Times (Nov 20, Sunday Business Section) carries a warning
from two Yale professors against a new management technique that can
be misused to snoop on personnel through sophisticated mathematical
analysis of communications, including computer network usage.
Professors Scott Boorman, a Yale sociologist, and Paul Levitt,
research mathematician at Yale and Harvard (economics), who authored
the article, also invented the technique some years ago.  Briefly, it
consists of computer-intensive analysis of personnel communications to
divide them into groups or "blocks" depending on whom they communicate
with, whom they copy on messages, whom they phone, and whose calls they
don't return.  Blocks of people so identified can be classified as
dissidents, potential traitors or "Young Turks" about to split off
their own company, company loyalists, promotion candidates and so
forth.  "Guilt by association" is built into the system since members
of the same block may not even know each other but merely copy the
same person on memos.
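The core of the block idea can be sketched in a few lines: people whose communication patterns are alike land in the same block, whether or not they know each other. Real blockmodeling clusters on statistical similarity of communication patterns, not the exact equality used in this hedged toy, and all names below are invented:

```python
# A hedged toy of blockmodeling: group senders whose sets of receivers
# are identical.  Real systems use statistical similarity, not equality.
from collections import defaultdict

def blocks(messages):
    """messages: iterable of (sender, receiver) pairs.
    Returns blocks of senders with identical receiver sets."""
    targets = defaultdict(set)
    for sender, receiver in messages:
        targets[sender].add(receiver)
    groups = defaultdict(list)
    for sender in sorted(targets):
        groups[frozenset(targets[sender])].append(sender)
    return sorted(groups.values())
```

Two people who only ever copy the same third party fall into one block here without ever exchanging a message, which is the "guilt by association" property the article warns about.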

The existence of an informal organization as a powerful directing
force in corporations, over and above the formal organization chart,
has been recognized for a long time.  The block analysis method
permits an "x-ray" penetration of these informal organizations
through use of computer on-line analysis which may act, per the
authors, as "judge and jury."  The increasing usage of electronic
mail, voice storage and forward systems, local networks and the like
make clandestine automation of this kind of snooping simple, powerful,
and almost inevitable.  As evidence of potential misuse, the authors cite
the high degree of interest in the method by iron-curtain government agencies.
An early success (late 60's) was also demonstrated in a Catholic
monastery where it averted organizational collapse by identifying
members as loyalists, "Young Turks," and outcasts.  Currently,
interest is high in U.S. corporations, particularly in internal
audit departments seeking to identify dissidents.

As the authors warn, this revolution in computers and information
systems brings us closer to George Orwell's state of Oceania.

------------------------------

Date: 1 Dec 1983 1629-EST
From: ELIZA at MIT-XX
Subject: Seminar Announcement

                 [Reprinted from the MIT-AI bboard.]


Date:  Wednesday, December 7th, 1983

Time:  Refreshments 3:30 P.M.
       Seminar      3:45 P.M.

Place: NE43-512A (545 Technology Square, Cambridge)


                    PROGRAMS ARE PREDICATES
                          C. A. R. Hoare
                        Oxford University

    A program is identified with the strongest predicate
    which describes every observation that might be made
    of a mechanism which executes the program.  A programming
    language is a set of programs expressed in a limited
    notation, which ensures that they are implementable
    with adequate efficiency, and that they enjoy desirable
    algebraic properties.  A specification S is a predicate
    expressed in arbitrary mathematical notation.  A program
    P meets this specification if

                            P ==> S .

    Thus a calculus for the derivation of correct programs
    is an immediate corollary of the definition of the
    language.

    These theses are illustrated in the design of two simple
    programming languages, one for sequential programming and
    the other for communicating sequential processes.
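The central claim, that a program P meets a specification S exactly when P ==> S, can be rendered concretely on a finite observation space. The following is my own illustrative sketch, not Hoare's calculus; the predicates are invented:

```python
# A finite toy rendering of "programs are predicates": a program and a
# specification are both predicates over observations (here input/output
# pairs), and "P meets S" is checked as P ==> S by enumeration.
def program(x, y):
    return y == x * x          # strongest predicate: exactly what it computes

def spec(x, y):
    return y >= 0              # weaker specification: result is non-negative

def meets(p, s, observations):
    """P ==> S: every observation P admits is admitted by S."""
    return all(s(x, y) for x, y in observations if p(x, y))

OBS = [(x, y) for x in range(-5, 6) for y in range(-30, 30)]
```

The implication is one-directional: squaring meets the non-negativity spec, but not conversely, since the spec admits observations the program never produces.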

Host:  Professor John V. Guttag

------------------------------

Date: 12/02/83 09:17:19
From: ROSIE at MIT-ML
Subject: Expert Systems Seminar

                             [Forwarded by SASW@MIT-MC.]

                          DATE:    Thursday, December 8, 1983
                          TIME:    2.15 p.m.  Refreshments
                                   2.30 p.m.  Lecture
                          PLACE:   NE43-AI Playroom


                          Explainable Expert Systems

                                Bill Swartout
                      USC/Information Sciences Institute


Traditional methods for explaining programs provide explanations by converting
the code of the program to English.  While such methods can sometimes
adequately explain program behavior, they cannot justify it.  That is, such
systems cannot tell why what the system is doing is reasonable.  The problem
is that the knowledge required to provide these justifications was used to
produce the program but is itself not recorded as part of the code and hence
is unavailable.  This talk will first describe the XPLAIN system, a previous
research effort aimed at improving the explanatory capabilities of expert
systems.  We will then outline the goals and research directions for the
Explainable Expert Systems project, a new research effort just starting up at
ISI.

The XPLAIN system uses an automatic programmer to generate a consulting
program by refinement from abstract goals.  The automatic programmer uses two
sources of knowledge: a domain model, representing descriptive facts about the
application domain, and a set of domain principles, representing
problem-solving knowledge, to drive the refinement process forward.  As XPLAIN
creates an expert system, it records the decisions it makes in a refinement
structure.  This structure is then used to provide explanations and
justifications of the expert system.
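The mechanism described can be caricatured in a few lines: refine an abstract goal into concrete steps from a table of domain principles, recording each decision so the finished plan can later be justified. This is NOT the XPLAIN code; every goal name and rationale below is invented:

```python
# A toy of refinement with recorded decisions, in the spirit of the
# XPLAIN description above (not its actual implementation).
def refine(goal, refinements, trace):
    """Expand goal; append (goal, rationale) to trace at each decision."""
    if goal not in refinements:
        return [goal]                          # primitive: no decision needed
    rationale, subgoals = refinements[goal]
    trace.append((goal, rationale))            # the "refinement structure"
    steps = []
    for sub in subgoals:
        steps.extend(refine(sub, refinements, trace))
    return steps

REFINEMENTS = {
    "treat-patient": ("domain principle: diagnose before prescribing",
                      ["diagnose", "prescribe"]),
    "diagnose": ("domain model: symptoms indicate causes",
                 ["collect-symptoms", "match-causes"]),
}
```

The recorded trace is what makes justification possible: the system can report not just what it did but which principle led it there, knowledge that would otherwise be compiled away.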

Our current research focuses on three areas.  First, we want to extend the
XPLAIN framework to represent additional kinds of knowledge such as control
knowledge for efficient execution.  Second, we want to investigate the
compilation process that moves from abstract to specific knowledge.  While it
does seem that human experts compile their knowledge, they do not always use
the resulting specific methods.  This may be because the specific methods
often contain compiled-in assumptions which are usually (but not always)
correct.  Third, we intend to use the richer framework provided by XPLAIN for
enhanced knowledge acquisition.

HOST:  Professor Peter Szolovits

------------------------------

End of AIList Digest
********************

∂07-Dec-83  0058	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #110    
Received: from SRI-AI by SU-AI with TCP/SMTP; 7 Dec 83  00:57:12 PST
Date: Tue  6 Dec 1983 20:24-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #110
To: AIList@SRI-AI


AIList Digest           Wednesday, 7 Dec 1983     Volume 1 : Issue 110

Today's Topics:
  AI and Manufacturing - Request,
  Bindings - HPP,
  Programming Languages - Environments & Productivity,
  Vision - Cultural Influences on Perception,
  AI Jargon - Mental States of Machines,
  AI Challange & Expert Systems,
  Seminar - Universal Subgoaling
----------------------------------------------------------------------

Date: 5 Dec 83 15:14:26 EST  (Mon)
From: Dana S. Nau <dsn%umcp-cs@CSNet-Relay>
Subject: AI and Automated Manufacturing

I and some colleagues at University of Maryland are doing a literature
search on the use of AI techniques in Automated Manufacturing.
The results of the literature search will comprise a report to be
sent to the National Bureau of Standards as part of a research
contract.  We'd appreciate any relevant information any of you may
have--especially copies of papers or technical reports.  In
return, I can send you (on request) copies of some papers I have
published on that subject, as well as a copy of the literature
search when it is completed.  My mailing address is

                Dana S. Nau
                Computer Science Dept.
                University of Maryland
                College Park, MD 20742

------------------------------

Date: Mon 5 Dec 83 08:27:28-PST
From: HPP Secretary <HPP-SECRETARY@SUMEX-AIM.ARPA>
Subject: New Address for HPP

  [Reprinted from the SU-SCORE bboard.]

The HPP has moved.  Our new address is:

    Heuristic Programming Project
    Computer Science Department
    Stanford University
    701 Welch Road, Bldg. C
    Palo Alto, CA 94304

------------------------------

Date: Mon, 5 Dec 83 09:43:51 PST
From: Seth Goldman <seth@UCLA-CS>
Subject: Programming environments are fine, but...

What are all of you doing with your nifty, adequate, and/or brain-damaged
computing environments?  Also, if we're going to discuss environments, it
would be more productive I think to give concrete examples of the form:

        I was trying to do or solve X
        Here is how my environment helped me OR
        This is what I need and don't yet have

It would also be nice to see some issues of AIList dedicated to presenting
1 or 2 paragraph abstracts of current work being pursued by readers and
contributors to this list.  How about it Ken?

        [Sounds good to me.  It would be interesting to know
        whether progress in AI is currently held back by conceptual
        problems or just by the programming effort of building
        large and user-friendly systems.  -- KIL]

Seth Goldman

------------------------------

Date: Monday, 5 December 1983 13:47:13 EST
From: Robert.Frederking@CMU-CS-CAD
Subject: Re: marcel on "lisp productivity question"

        I just thought I should mention that production system languages
share all the desirable features of Prolog mentioned in the previous
message, particularly being "rule-based computing with a clean formalism".
The main differences with the OPS family of languages is that OPS uses
primarily forward inference, instead of backwards inference, and a slightly
different matching mechanism.  Preferring one over the other depends, I
suspect, on whether you think in terms of proofs or derivations.

------------------------------

Date: Mon, 5 Dec 83 10:23:17 pst
From: evans@Nosc (Evan C. Evans)
Subject: Vision & Such

Ken Laws in AIList Digest 1:99 states:  an  adequate  answer [to
the question of why computers can't see yet] requires a guess
at how it is that the human vision system can work in all cases.
I cannot answer Ken's question, but perhaps I  can provide some
useful input.

        language shapes culture    (Sapir-Whorf hypothesis)
        culture  shapes vision     (see following)
        vision   shapes language   (a priori)

The influence of culture on perception (vision) takes many forms.
A statistical examination (unpublished) of the British newspaper
game "Where's the ball?" is worth consideration.  This game has
been appearing for some time in British, Australian, New Zealand,
& Fijian papers.  So far as I know, it has not yet made its
appearance in U.S. papers.  The game is played thus:
        A photograph of some common sport involving a ball is
published with the ball erased from the picture & the question,
where's the ball?  Various members of the readership send in
their guesses & that closest to the ball's actual position in the
unmodified photo wins.  Some time back the responses to several
rounds of this game were subjected to statistical analysis.  This
analysis showed that there were statistically valid differences
associated with the cultural background of the participants.
This finding was particularly striking in Fiji, with a resident
population comprising several very different cultural groups.
Ball placement by the different groups tended to cluster at
significantly different locations in the picture, even for a game
like soccer that was well known & played by all.  It is
unfortunate that this work (not mine) has not been published.  It does
suggest two things: a.) a cultural influence on vision & perception,
& b.) a powerful means of conducting experiments to learn
more about this influence.  For instance, this same research was
elaborated into various TV displays designed to discover where
children of various age groups placed an unseen object to which
an arrow pointed.  The children responded enthusiastically to
this new TV game, giving their answers by means of a light pen.
Yet statistically significant amounts of data were collected
efficiently & painlessly.
        I've constructed the loop above to suggest that none of
the three: vision, language, & culture should be studied out of
context.

E. C. Evans III

------------------------------

Date: Sat 3 Dec 83 00:42:50-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Mental states of machines

Steven Gutfreund's criticism of John McCarthy is unjustified.  I
haven't read the article in "Psychology Today", but I am familiar with
the notion put forward by JMC and condemned by SG.  The question can
be put in simple terms: is it useful to attribute mental states and
attitudes to machines? The answer is that our terms for mental states
and attitudes ("believe", "desire", "expect", etc...) represent a
classification of possible relationships between world states and the
internal (inaccessible) states of designated individuals. Now, for
simple individuals and worlds, for example small finite automata, it
is possible to classify the world-individual relationships with simple
and tractable predicates. For more complicated systems, however, the
language of mental states is likely to become essential, because the
classifications it provides may well be computationally tractable in
ways that other classifications are not. Remember that individuals of
any "intelligence" must have states that encode classifications of
their own states and those of other individuals. Computational
representations of the language of mental states seem to be the only
means we have to construct machines with such rich sets of states that
can operate in "rational" ways with respect to the world and other
individuals.

SG's comment is analogous to the following criticism of our use of the
terms like "execution", "wait" or "active" when talking about the
states of computers: "it is wrong to use such terms when we all know
that what is down there is just a finite state machine, which we
understand so well mathematically."

Fernando Pereira

------------------------------

Date: Mon 5 Dec 83 11:21:56-PST
From: Wilkins  <WILKINS@SRI-AI.ARPA>
Subject: complexity of formal systems

  From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
  They then resort to arcane languages and to attributing 'mental'
  characteristics to what are basically fuzzy algorithms that have been applied
  to poorly formalized or poorly characterized problems.  Once the problems are
  better understood and are given a more precise formal characterization, one
  no longer needs "AI" techniques.

I think Professor McCarthy is thinking of systems (possibly not built yet)
whose complexity comes from size and not from imprecise formalization.  A
huge AI program has lots of knowledge; all of it may be precisely formalized
in first-order logic or some other well understood formalism, and this
knowledge may be combined and used by well understood and precise inference
algorithms; yet because of the (for practical purposes) infinite number of
inputs and possible combinations of the individual knowledge formulas, the
easiest (best? only?) way to describe the behavior of the system is by
attributing mental characteristics.  Some AI systems approaching this
complexity already
exist.  This has nothing to do with "fuzzy algorithms" or "poorly formalized
problems", it is just the inherent complexity of the system.  If you think
you can usefully explain the practical behavior of any well-formalized system
without using mental characteristics, I submit that you haven't tried it on a
large enough system (e.g. some systems today need a larger address space than
that available on a DEC 2060 -- combining that much knowledge can produce
quite complex behavior).

------------------------------

Date: 28 Nov 83 3:10:20-PST (Mon)
From: harpo!floyd!clyde!akgua!sb1!sb6!bpa!burdvax!sjuvax!rbanerji@Ucb-
      Vax
Subject: Re: Clarifying my "AI Challange"
Article-I.D.: sjuvax.157

        [...]
        I am reacting to Johnson, Helly and Dietterich.  I really liked
[Ken Laws'] technical evaluation of Knowledge-based programming. Basically
similar to what Tom also said in defense of Knowledge-based programming,
but KIL said it much more clearly.
        On one aspect, I have to agree with Johnson about expert systems
and hackery, though. The only place there is any attempt on the part of
an author to explain the structure of the knowledge base(s) is in the
handbook. But I bet that as the structures are changed by later authors
for various justified and unjustified reasons, they will not be clearly
explained except in vague terms.
        I do not accept Dietterich's explanation that AI papers are hard
to read because of terminology, or because what they are trying to do
is so hard. On the latter point, we do not expect that what they are
DOING be easy, just that HOW they are doing it be clearly explained:
and that the definition of clarity follow the lines set out in classical
scientific disciplines. I hope that the days are gone when AI was
considered some sort of superscience answerable to none. On the matter
of terminology, papers (for example) on algebraic topology have more
terminology than AI: terminology developed over a longer period of time.
But if one wants to and has the time, he can go back, back, back along
lines of reference and to textbooks and be assured he will have an answer.
In AI, about the only hope is to talk to the author and unravel his answers
carefully and patiently and hope that somewhere along the line one does not
get "well, there is a hack there... it is kind of long and hard to explain:
let me show you the overall effect"
        In other sciences, hard things are explained on the basis of
previously explained things. These explanation trees are much deeper
than in AI; they are so strong and precise that climbing them may
be hard, but never hopeless.
        I agree with Helly in that this lack is due to the fact that no
attempt has been made in AI to have workers start with a common basis in
science, or even in scientific methodology. It has suffered in the past
because of this. When existing methods of data representation and processing
in theorem proving were found inefficient, the AI culture developed a
self-image that its needs were ahead of logic, notwithstanding the fact
that the techniques they were using were representable in logic and that
the reason for their seeming success was in the fact that they were designed
to achieve efficiency at the cost (often high) of flexibility. Since
then, those words have been "eaten": but at considerable cost. The reason
may well be that the critics of logic did not know enough logic to see this.
In some cases, their professors did--but never cared to explain what the
real difficulty in logic was. Or maybe they believed their own propaganda.
        This lack of uniformity of background came out clearly when Tom said
that because of AI work people now clearly understood the difference between
the subset of a set and the element of a set. This difference has been well
known at least since early this century if not earlier. If workers in AI
did not know it before, it is because of their reluctance to know the meaning
of a term before they use it. This has also often come from their belief
that precise definitions will rob their terms of their richness (not realising
that once they have interpreted their terms by a program, they have a precise
definition, only written in a much less comprehensible way: set theorists
never had any difficulty understanding the difference between subsets and
elements). If they were trained, they would know the techniques that are
used in Science for defining terms.
        I disagree with Helly that Computer Science in general is unscientific.
There has always been a precise mathematical basis of Theorem proving (AI,
actually) and in computation and complexity theory. It is true, however, that
the traditional techniques of experimental research have not been used in
AI at all: people have tried hard to use them in software, but seem to
be having difficulties.
        Would Helly disagree with me if I say that Newell and Simon's work
in computer modelling of psychological processes has been carried out
with at least the amount of scientific discipline that psychologists use?
I have always seen that work as one of the success stories in AI.  And
at least some psychologists seem to agree.

        I agree with Tom that AI will have to keep going even if someone
proves that P=NP. The reason is that many AI problems are amenable to
N↑2 methods already: except that N is too big. In this connection I have
a question, in case someone can tell me. I think Rabin has a theorem
that given any system of logic and any computable function, there is
a true statement which takes longer to prove than that function predicts.
What does this say about the relation between P and NP, if anything?
        Too long already!

                                ..allegra!astrovax!sjuvax!rbanerji

------------------------------

Date: 1 Dec 83 13:51:36-PST (Thu)
From: decvax!duke!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Expert Systems
Article-I.D.: ncsu.2420

Are expert systems new? Different?  Well, how about an example.  Time
was, to run a computer system, one needed at least one operator to care
for and feed the system.  This is increasingly handled by sophisticated
operating systems.  Is an operating system, then, an "expert system"?

An OS is usually developed using a style of programming which is quite
different from that of wimpy, unskilled, unenlightened applications
programmers.  It would be very hard to build an operating system in the
applications style.  (I claim.)  The people who developed the style and
practice it to build systems are not usually AI people, although I would
wager the personality profiles would be quite similar.

Now, that is I think a major point.  Are there different types of people in
physics as compared to biology?  I would say so, having seen some of each.
Further, biologists do research in ways that seem different (again, this is
purely idiosyncratic evidence) from the ways physicists do.  Is it that one
group knows how to do science better, or are the fields just so different,
or are the people attracted to each just different?

Now, suppose a team of people got together and built an expert system which
was fully capable of taking over the control of a very sophisticated
(previously manual, by highly trained people) inventory, billing and
ordering system.  I claim that this is at least as complex as diagnosis
of and dosing of particular drugs (e.g. mycin).  My expert system
was likely written in Cobol by people doing things in quite different ways
from AI or systems hackers.

One might want to argue that the productivity was much lower, that the
result was harder to change and so on.  I would prefer to see this in
Figures, on proper comparisons.  I suspect that the complexity of the
commercial software I mentioned is MUCH greater than the usual problem
attacked by AI people, so that the "productivity" might be comparable,
with the extra time reflecting the complexity.  For example, designing
the reports and generating them for a large complex system (and doing
a good job)  may take a large fraction of the total time, yet such
reporting is not usually done in the AI world.  Traces of decisions
and other discourse are not the same.  The latter is easier I think, or
at least it takes less work.

What I'm getting at is that expert systems have been around for a long
time; it's only recently that AI people have gotten into the arena. There
are other techniques which have been applied to developing these, and
I am waiting to be convinced that the AI people have a priori superior
strategies.  I would like to be so convinced and I expect someday to
be convinced, but then again, I probably also fit the AI personality
profile so I am rather biased.
----GaryFostel----

------------------------------

Date: 5 Dec 1983 11:11:52-EST
From: John.Laird at CMU-CS-ZOG
Subject: Thesis Defense

                 [Reprinted from the CMU-AI bboard.]

Come see my thesis defense: Wednesday, December 7 at 3:30pm in 5409 Wean Hall

                        UNIVERSAL SUBGOALING

                             ABSTRACT

A major aim of Artificial Intelligence (AI) is to create systems that
display general problem solving ability.  In problem solving, knowledge is
used to avoid uncertainty over what to do next, or to handle the
difficulties that arise when uncertainty cannot be avoided.  Uncertainty
is handled in AI problem solvers through the use of methods and subgoals,
where a method specifies the behavior for avoiding uncertainty in pursuit
of a goal, and a subgoal allows the system to recover from a difficulty once
it arises.  A general problem solver should be able to respond to every task
with appropriate methods to avoid uncertainty, and when difficulties do
arise, the problem solver should be able to recover by using an appropriate
subgoal.  However, current AI problem solvers are limited in their generality
because they depend on sets of fixed methods and subgoals.

In previous work, we investigated the weak methods and proposed that a
problem solver does not explicitly select a method for a goal, with the
inherent risk of selecting an inappropriate method.  Instead, the problem
solver is organized so that the appropriate weak method emerges during
problem solving from its knowledge of the task.  We called this organization
a universal weak method and we demonstrated it within an architecture,
called SOAR.  However, we were limited to subgoal-free weak methods.

The purpose of this thesis is to develop a problem solver where subgoals
arise whenever the problem solver encounters a difficulty in performing the
functions of problem solving.  We call this capability universal subgoaling.
In this talk, I will describe and demonstrate an implementation of universal
subgoaling within SOAR2, a production system based on search in a problem
space.  Since SOAR2 includes both universal subgoaling and a universal weak
method, it is not limited by a fixed set of subgoals or methods.  We provide
two demonstrations of this: (1) SOAR2 creates subgoals whenever difficulties
arise during problem solving, and (2) SOAR2 extends the set of weak methods
that emerge from the structure of a task without explicit selection.
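The impasse idea in the abstract can be caricatured in a few lines. This is a
deliberately tiny, modern sketch, not SOAR2's actual machinery: the `Add`
operator, the `resolve_tie` "selection space", and the `evaluate` heuristic
are all invented for illustration.

```python
# Sketch: when the solver has no single preferred operator (an impasse),
# it does not consult a fixed subgoal table; it sets up a subgoal whose
# job is to resolve the impasse itself.

def solve(state, goal_test, operators, evaluate):
    """Depth-first search in a problem space; subgoal on operator ties."""
    if goal_test(state):
        return state
    candidates = [op for op in operators if op.applicable(state)]
    if len(candidates) > 1:
        # Impasse: a tie among operators. Subgoal into a "selection"
        # space to order the candidates (here, trivially, by heuristic).
        candidates = resolve_tie(state, candidates, evaluate)
    for op in candidates:
        result = solve(op.apply(state), goal_test, operators, evaluate)
        if result is not None:
            return result
    return None

def resolve_tie(state, candidates, evaluate):
    # The subgoal's problem space: evaluate each candidate result and
    # prefer the best-looking one first.
    return sorted(candidates, key=lambda op: evaluate(op.apply(state)))

class Add:
    """Toy operator: add n to a number, but never overshoot 7."""
    def __init__(self, n): self.n = n
    def applicable(self, state): return state + self.n <= 7
    def apply(self, state): return state + self.n

print(solve(0, lambda s: s == 7, [Add(1), Add(3)], lambda s: abs(7 - s)))
```

The point of the sketch is only that subgoals here arise from a detected
difficulty (the tie) rather than from an enumerated, fixed subgoal set.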

------------------------------

End of AIList Digest
********************

∂10-Dec-83  1902	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #111    
Received: from SRI-AI by SU-AI with TCP/SMTP; 10 Dec 83  19:01:54 PST
Date: Sat 10 Dec 1983 14:46-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #111
To: AIList@SRI-AI


AIList Digest           Saturday, 10 Dec 1983     Volume 1 : Issue 111

Today's Topics:
  Call for Papers - Special Issue of AJCL,
  Linguistics - Phrasal Analysis Paper,
  Intelligence - Purpose of Definition,
  Expert Systems - Complexity,
  Environments - Need for Sharable Software,
  Jargon - Mental States,
  Administrivia - Spinoff Suggestion,
  Knowledge Representation - Request for Discussion
----------------------------------------------------------------------

Date: Thu 8 Dec 83 08:55:34-PST
From: Ray Perrault <RPERRAULT@SRI-AI.ARPA>
Subject: Special Issue of AJCL

         American Journal of Computational Linguistics

The American Journal of Computational Linguistics is planning a
special issue devoted to the Mathematical Properties of Linguistic
Theories.  Papers are hereby requested on the generative capacity of
various syntactic formalisms as well as the computational complexity
of their related recognition and parsing algorithms.  Articles on the
significance (and the conditions for the significance) of such results
are also welcome.  All papers will be subjected to the normal
refereeing process and must be accepted by the Editor-in-Chief, James
Allen.  In order to allow for publication in Fall 1984, five copies of
each paper should be sent by March 31, 1984 to the special issue
editor,

C. Raymond Perrault                Arpanet: Rperrault@sri-ai
SRI International                  Telephone: (415) 859-6470
EK268
Menlo Park, CA 94025.

Indication of intention to submit would also be appreciated.

------------------------------

Date: 8 Dec 1983 1347-PST
From: MEYERS.UCI-20A@Rand-Relay
Subject: phrasal analysis paper


Over a month ago, I announced that I'd be submitting
a paper on phrasal analysis to COLING.  I apologize
to all those who asked for a copy for not getting it
to them yet.  COLING acceptance date is April 2,
so this may be the earliest date at which I'll be releasing
papers.  Please do not lose heart!

Some preview of the material might interest AILIST readers:

The paper is entitled "Conceptual Grammar", and discusses
a grammar that uses syntactic and 'semantic' nonterminals.
Very specific and very general information about language
can be represented in the grammar rules.  The grammar is
organized into explicit levels of abstraction.
The emphasis of the work is pragmatic, but I believe it
represents a new and useful approach to Linguistics as
well.

Conceptual Grammar can be viewed as a systematization of the
knowledge base of systems such as PHRAN (Wilensky and Arens,
at UC Berkeley).  Another motivation for a conceptual grammar is
the lack of progress in language understanding using syntax-based
approaches.  A third motivation is the lack of intuitive appeal
of existing grammars -- existing grammars offer no help in manipulating
concepts the way humans might.  Conceptual Grammar is
an 'open' grammar at all levels of abstraction.  It is meant
to handle special cases, exceptions to general rules, idioms, etc.
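To make the idea of mixing syntactic and 'semantic' nonterminals concrete,
here is one guess at the flavor of such a grammar, with an idiom handled as
its own very specific rule. The rules, lexicon, and recognizer below are an
editor's illustration, not VOX's or PHRAN's actual notation.

```python
# Rules range from general and syntactic down to specific phrasal patterns;
# 'semantic' categories like SHIP sit in the same rule set as syntactic ones.
rules = [
    ("S",        ["NP", "VP"]),                # general, purely syntactic
    ("REPORT",   ["SHIP", "MOVE"]),            # semantic categories in a rule
    ("MOVE",     ["is", "underway"]),          # specific phrasal pattern
    ("GREETING", ["how", "do", "you", "do"]),  # idiom as an explicit rule
]
lexicon = {"SHIP": {"kennedy", "nimitz"}}

def parse(cat, toks):
    """Does the token list toks realize category cat?"""
    if len(toks) == 1 and (cat == toks[0] or toks[0] in lexicon.get(cat, ())):
        return True
    return any(split_match(rhs, toks) for lhs, rhs in rules if lhs == cat)

def split_match(rhs, toks):
    """Can toks be split so each piece realizes the next rhs category?"""
    if not rhs:
        return not toks
    return any(parse(rhs[0], toks[:i]) and split_match(rhs[1:], toks[i:])
               for i in range(1, len(toks) + 1))

print(parse("REPORT", ["kennedy", "is", "underway"]))   # True
```

Because very general rules, domain-specific rules, and idioms all live in one
uniform rule set, special cases need no separate mechanism.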

Papers on the implemented system, called VOX, will follow
in the near future.  VOX analyzes messages in the Navy domain.
(However, the approach to English is completely general).

If anyone is interested, I can elaborate, though it is
hard to discuss such work in this forum.  Requests
for papers (and for abstracts of UCI AI Project papers)
can be sent by computer mail, or 'snail-mail' to:

        Amnon Meyers
        AI Project
        Department of Computer Science
        University of California
        Irvine, CA  92717

PS: A paper has already been sent to CSCSI.  The papers emphasize
    different aspects of Conceptual Grammar.  A paper on VOX as
    an implementation of Conceptual Grammar is planned for AAAI.

------------------------------

Date: 2 Dec 83 7:57:46-PST (Fri)
From: ihnp4!houxm!hou2g!stekas @ Ucb-Vax
Subject: Re: Rational Psych (and science)
Article-I.D.: hou2g.121

It is true that psychology is not a "science" in the way a physicist
defines "science". Of course, a physicist would be likely to bend
his definition of "science" to exclude psychology.

The situation is very much the same as defining "intelligence".
Social "scientists" keep tightening their definition of intelligence
as required to exclude anything which isn't a human being.  While
AI people now argue over what intelligence is, when an artificial system
is built with the mental ability of a mouse (the biological variety!)
in no time all definitions of intelligence will be bent to include it.

The real significance of a definition is that it clarifies the *direction*
in which things are headed.  Defining "intelligence" in terms of
adaptability and self-consciousness is evidence of a healthy direction
for AI.

                                               Jim

------------------------------

Date: Fri 9 Dec 83 16:08:53-PST
From: Peter Karp <KARP@SUMEX-AIM.ARPA>
Subject: Biologists, physicists, and report generating programs

I'd like to ask Mr. Fostel how biologists "do research in ways that seem
different than physicists".  It would be pretty exciting to find that
one or both of these two groups do science in a way that is not part of
standard scientific method.

He also makes the following claim:

   ... the complexity of the commercial software I mentioned is
   MUCH greater than the usual problem attacked by AI people...

With the example that:

   ... designing the reports and generating them for a large complex
   system (and doing a good job) may take a large fraction of the total
   time, yet such reporting is not usually done in the AI world.

This claim is rather absurd.  While I will not claim that deciding on
the best way to present a large amount of data is a trivial task, the
point is that report generating programs have no knowledge about data
presentation strategies.  People who do have such knowledge spend hours
and hours deciding on a good scheme and then HARD CODING such a scheme
into a program.  Surely one would not claim that a program consisting
solely of a set of WRITELN (or insert your favorite output keyword)
statements has any complexity at all, much less intelligence or
knowledge?  Just because a program takes a long time to write doesn't
mean it has any complexity, in terms of control structures or data
structures.  And in fact this example is a perfect proof of this
conjecture.

------------------------------

Date: 2 Dec 83 15:27:43-PST (Fri)
From: sri-unix!hplabs!hpda!fortune!amd70!decwrl!decvax!duke!mcnc!shebs
      @utah-cs.UUCP (Stanley Shebs)
Subject: Re: RE: Expert Systems
Article-I.D.: utah-cs.2279

A large data-processing application is not an expert system because
it cannot explain its action, nor is the knowledge represented in an
adequate fashion.  A "true" expert system would *not* consist of
algorithms as such.  It would consist of facts and heuristics organized
in a fashion to permit some (relatively uninteresting) algorithmic
interpreter to generate interesting and useful behavior. Production
systems are a good example.  The interpreter is fixed - it just selects
rules and fires them.  The expert system itself is a collection of rules,
each of which represents a small piece of knowledge about the domain.
This is of course an idealization - many "expert systems" have a large
procedural component.  Sometimes the existence of that component can
even be justified...
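The organization described here (fixed, uninteresting interpreter; knowledge
carried entirely by the rules) can be sketched in a few lines. This is a
modern illustration; the diagnostic rules and facts are invented, not drawn
from MYCIN or any system mentioned in this discussion.

```python
# A minimal forward-chaining production system. The interpreter below is
# fixed: it just selects rules whose conditions match and fires them until
# nothing new is added. The "expert system" is the rule list, not the code.

def run(rules, facts):
    """Fire matching rules until the fact set reaches quiescence."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            new = action(facts) if condition(facts) else set()
            if not new <= facts:          # the rule derived something new
                facts |= new
                changed = True
    return facts

# Each rule is a small, separate piece of domain knowledge.
rules = [
    (lambda f: "fever" in f and "rash" in f, lambda f: {"suspect-measles"}),
    (lambda f: "suspect-measles" in f,       lambda f: {"recommend-isolation"}),
]

print(run(rules, {"fever", "rash"}))
```

Adding knowledge means appending rules, with no change to the interpreter,
which is exactly the property that distinguishes this style from a
conventional data-processing program.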

                                                stan shebs
                                                utah-cs!shebs

------------------------------

Date: Wed, 7 Dec 1983  05:39 EST
From: LEVITT%MIT-OZ@MIT-MC.ARPA
Subject: What makes AI crawl

    From: Seth Goldman <seth@UCLA-CS>
    Subject: Programming environments are fine, but...

    What are all of you doing with your nifty, adequate, and/or brain-damaged
    computing environments?  Also, if we're going to discuss environments, it
    would be more productive I think to give concrete examples...
            [Sounds good to me.  It would be interesting to know
            whether progress in AI is currently held back by conceptual
            problems or just by the programming effort of building
            large and user-friendly systems.  -- KIL]

It's clear to me that, despite a relative paucity of new "conceptual"
AI ideas, AI is being held back entirely by the latter "programming
effort" problem, AND by the failure of senior AI researchers to
recognize this and address it directly.  The problem is regressive:
since programming problems are SO hard, the senior faculty typically
give up programming altogether and lose touch with the problems.

Nobody seems to realize how close we would be to practical AI, if just
a handful of the important systems of the past were maintained and
extended, and if the most powerful techniques were routinely applied
to new applications - if an engineered system with an ongoing,
expanding knowledge base were developed.  Students looking for theses
and "turf" are reluctant to engineer anything familiar-looking.  But
there's every indication that the proven techniques of the 60's/early
70's could become the core of a very smart system with lots of
overlapping knowledge in very different subjects, opening up much more
interesting research areas - IF the whole thing didn't have to be
(re)programmed from scratch.  AI is easy now, showing clear signs of
diminishing returns; CS/software engineering is hard.

I have been developing systems for the kinds of analogy problems music
improvisors and listeners solve when they use "common sense"
descriptions of what they do/hear, and of learning by ear.  I have
needed basic automatic constraint satisfaction systems
(Sutherland'63), extensions for dependency-directed backtracking
(Sussman'77), and example comparison/extension algorithms
(Winston'71), to name a few.  I had to implement everything myself.
When I arrived at MIT AI there were at least 3 OTHER AI STUDENTS
working on similar constraint propagator/backtrackers, each sweating
out his version for a thesis critical path, resulting in a draft
system too poorly engineered and documented for any of the other
students to use.  It was idiotic.  In a sense we wasted most of our
programming time, and would have been better off ruminating about
unfamiliar theories like some of the faculty.  Theories are easy (for
me, anyway).  Software engineering is hard.  If each of the 3 ancient
discoveries above was an available module, AI researchers could have
theories AND working programs, a fine show.
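For readers unfamiliar with the first of the "ancient discoveries" listed
above, a toy constraint-propagation network (in the Sutherland tradition)
fits in a page. The `Cell`/`Adder` names and the arithmetic constraint are
illustrative assumptions, not code from any of the theses mentioned, and
dependency-directed backtracking is deliberately left out.

```python
# Each Cell holds at most one value; each constraint watches its cells and
# propagates a deduced value as soon as enough of its cells are known.

class Cell:
    def __init__(self, name):
        self.name, self.value, self.users = name, None, []
    def set(self, v):
        if self.value is None:
            self.value = v
            for c in self.users:      # wake constraints watching this cell
                c.propagate()

class Adder:                          # enforces a + b = c
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for cell in (a, b, c):
            cell.users.append(self)
    def propagate(self):
        a, b, c = self.a.value, self.b.value, self.c.value
        if a is not None and b is not None:   self.c.set(a + b)
        elif a is not None and c is not None: self.b.set(c - a)
        elif b is not None and c is not None: self.a.set(c - b)

x, y, z = Cell("x"), Cell("y"), Cell("z")
Adder(x, y, z)
x.set(3); z.set(10)
print(y.value)        # y deduced from the constraint: 7
```

The complaint in the message is precisely that modules like this were
re-implemented, thesis by thesis, instead of being shared.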

------------------------------

Date: Thu, 8 Dec 83 11:56 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: re: mental states of machines

I have no problem with using anthropomorphic (or "mental") descriptions of
systems as a heuristic for dealing with difficult problems. One such
trick I especially approve of is Seymour Papert's "body syntonicity"
technique. The basic idea is to get young children to understand the
interaction of mathematical concepts by getting them to enter into a
turtle world and become an active participant in it, and to use this
perspective for understanding the construction of geometric structures.

What I am objecting to is that I sense that John McCarthy is implying
something more in his article: that human mental states are no different
than the very complex systems that we sometimes use mental descriptions
as a shorthand to describe.

I would refer to Ilya Prigogine's 1977 Nobel Prize-winning work in chemistry on
"Dissipative Structures" to illustrate the foolishness of McCarthy's
claim.

Dissipative structures can be explained to some extent to non-chemists by means
of the termite analogy. Termites construct large rich and complex domiciles.
These structures sometimes are six feet tall and are filled with complex
arches and domed structures (it took human architects many thousands of
years to come up with these concepts). Yet if one watches termites at
the lowest "mechanistic" level (one termite at a time), all one sees
is a termite randomly placing drops of sticky wood pulp in random spots.

What Prigogine noted was that there are parallels in chemistry, where random
underlying processes spontaneously give rise to complex and rich ordered
structures at higher levels.

If I accept McCarthy's argument that complex systems based on finite state
automata exhibit mental characteristics, then I must also hold that termite
colonies have mental characteristics, Douglas Hofstadter's Aunt Hillary also
has mental characteristics, and that certain colloidal suspensions and
amorphous crystals have mental characteristics.

                                                - Steven Gutfreund
                                                  Gutfreund.umass@csnet-relay

  [I, for one, have no difficulty with assigning mental "characteristics"
  to inanimate systems.  If a computer can be "intelligent", and thus
  presumably have mental characteristics, why not other artificial
  systems?  I admit that this is Humpty-Dumpty semantics, but the
  important point to me is the overall I/O behavior of the system.
  If that behavior depends on a set of (discrete or continuous) internal
  states, I am just as happy calling them "mental" states as calling
  them anything else.  To reserve the term mental for beings having
  volition, or souls, or intelligence, or neurons, or any other
  intuitive characteristic seems just as arbitrary to me.  I presume
  that "mental" is intended to contrast with "physical", but I side with
  those seeing a physical basis to all mental phenomena.  Philosophers
  worry over the distinction, but all that matters to me is the
  behavior of the system when I interface with it.  -- KIL]

------------------------------

Date: 5 Dec 83 12:08:31-PST (Mon)
From: harpo!eagle!mhuxl!mhuxm!pyuxi!pyuxnn!pyuxmm!cbdkc1!cbosgd!osu-db
      s!lum @ Ucb-Vax
Subject: Re: defining AI, AI research methodology, jargon in AI
Article-I.D.: osu-dbs.426

Perhaps Dyer is right.  Perhaps it would be a good thing to split net.ai/AIList
into two groups, net.ai and net.ai.d, a la net.jokes and net.jokes.d.  In one
the AI researchers could discuss actual AI problems, and in the other,
philosophers could discuss the social ramifications of AI, etc.  Take your pick.

Lum Johnson (cbosgd!osu-dbs!lum)

------------------------------

Date: 7 Dec 83 8:27:08-PST (Wed)
From: decvax!tektronix!tekcad!franka @ Ucb-Vax
Subject: New Topic (technical) - (nf)
Article-I.D.: tekcad.155

        OK, some of you have expressed a dislike for "non-technical,
philosophical, etc." discussions on this newsgroup. So for those of you who
are tired of this, I pose a technical question for you to talk about:

        What is your favorite method of representing knowledge in a KBS?
Do you depend on frames, atoms of data jumbled together randomly, or something
in between? Do you have any packages (for public consumption which run on
machines that most of us have access to) that aid people in setting up
knowledge bases?

        I think that this should keep this newsgroup talking at least partially
technically for a while. No need to thank me. I just view it as a public
service.

                                        From the truly menacing,
   /- -\                                but usually underestimated,
    <->                                 Frank Adrian
                                        (tektronix!tekcad!franka)

------------------------------

End of AIList Digest
********************

DLO - Sorry, a couple of days ago I deleted the second message from this
page and moved it to another file, forgetting that this was not my own
mail file.  The message is now restored and appears on this page starting
at line 52.  I have changed my CKSUM entry for this file to put me in /R
mode, so it shouldn't happen again!

∂14-Dec-83  1459	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #112    
Received: from SRI-AI by SU-AI with TCP/SMTP; 14 Dec 83  14:56:09 PST
Date: Wed 14 Dec 1983 10:03-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #112
To: AIList@SRI-AI


AIList Digest           Wednesday, 14 Dec 1983    Volume 1 : Issue 112

Today's Topics:
  Memorial Fund - Carl Engelman,
  Programming Languages - Lisp Productivity,
  Expert Systems - System Size,
  Scientific Method - Information Sciences,
  Jargon - Mental States,
  Perception - Culture and Vision,
  Natural Language - Flame
----------------------------------------------------------------------

Date: Fri 9 Dec 83 12:58:53-PST
From: Don Walker <WALKER@SRI-AI.ARPA>
Subject: Carl Engelman Memorial Fund

                      CARL ENGELMAN MEMORIAL FUND

        Carl Engelman, one of the pioneers in artificial intelligence
research, died of a heart attack at his home in Cambridge, Massachusetts,
on November 26, 1983.  He was the creator of MATHLAB, a program developed
in the 1960s for the symbolic manipulation of mathematical expressions.
His objective there was to supply the scientist with an interactive
computational aid of a "more intimate and liberating nature" than anything
available before. Many of the ideas generated in the development of MATHLAB
have influenced the architecture of other systems for symbolic and algebraic
manipulation.

        Carl graduated from the City College of New York and then earned
an MS Degree in Mathematics at the Massachusetts Institute of Technology.
During most of his professional career, he worked at The MITRE Corporation
in Bedford, Massachusetts.  In 1973 he was on leave as a visiting professor
at the Institute of Information Science of the University of Turin, under a
grant from the Italian National Research Council.

        At the time of his death Carl was an Associate Department Head
at MITRE, responsible for a number of research projects in artificial
intelligence.  His best known recent work was KNOBS, a knowledge-based
system for interactive planning that was one of the first expert systems
applied productively to military problems.  Originally developed for the
Air Force, KNOBS was then adapted for a Navy system and is currently being
used in two NASA applications.  Other activities under his direction
included research on natural language understanding and automatic
programming.

        Carl published a number of papers in journals and books and gave
presentations at many conferences.  But he also illuminated every meeting
he attended with his incisive analysis and his keen wit.  While he will
be remembered for his contributions to artificial intelligence, those
who knew him personally will deeply miss his warmth and humor, which he
generously shared with so many of us.  Carl was particularly helpful to
people who had professional problems or faced career choices; his paternal
support, personal sponsorship, and private intervention made significant
differences for many of his colleagues.

        Carl was a member of the American Association for Artificial
Intelligence, the American Institute of Aeronautics and Astronautics, the
American Mathematical Society, the Association for Computational
Linguistics, and the Association for Computing Machinery and its Special
Interest Group on Artificial Intelligence.

        Contributions to the "Carl Engelman Memorial Fund" should be
sent to Judy Clapp at The MITRE Corporation, Bedford, Massachusetts 01730.
A decision will be made later on how those funds will be used.

------------------------------

Date: Tue, 13 Dec 83 09:49 PST
From: Kandt.pasa@PARC-MAXC.ARPA
Subject: re: lisp productivity question

Jonathan Slocum (University of Texas at Austin) has a large natural
language translation program (thousands of lines of Interlisp) that was
originally in Fortran.  The compression that he got was 16.7:1.  Also, I
once wrote a primitive production rule system in both Pascal and
Maclisp.  The Pascal version was over 2000 lines of code and the Lisp
version was about 200 or so.  The Pascal version also was not as
powerful as the Lisp version because of Pascal's strong data typing and
dynamic allocation scheme.

-- Kirk

------------------------------

Date: 9 Dec 83 19:30:46-PST (Fri)
From: decvax!cca!ima!inmet!bhyde @ Ucb-Vax
Subject: Re: RE: Expert Systems - (nf)
Article-I.D.: inmet.578

I would like to add to Gary's comments.  There are also issues of
scale to be considered.  Many of the systems built outside of AI
are orders of magnitude larger.  I was amazed to read that at one
point the largest OPS production system, a computer game called Haunt,
had so very few rules in it.  A compiler written using a rule based
approach would have 100 times as many rules.  How big are the
AI systems that folks actually build?

The engineering component of large systems obscures the architectural
issues involved in their construction.  I have heard it said that
AI isn't a field, it is a stage of the problem solving process.

It seems telling that the ARPA 5-year speech recognition project
succeeded not with Hearsay (I gather that, after it was too late,
it did manage to meet the performance requirements), but with Harpy.  Now,
Harpy was very much like a signal processing program.  The "beam search"
mechanism it used is very different from the popular approaches of
the AI community.  In the end it seems that it was an act of engineering,
with little insight gained into the nature of knowledge.

The issues that caused AI and the rest of computing to split a few
decades ago seem almost quaint now.  Allen Newell has a pleasing paper
about these.  Only the importance of an interpreter-based program
development environment seems to persist.  Can you buy a workstation
capable of sharing files with your 360 yet?

[...]
                                ben hyde

------------------------------

Date: 10 Dec 83 16:33:59-PST (Sat)
From: decvax!ittvax!dcdwest!sdcsvax!davidson @ Ucb-Vax
Subject: Information sciences vs. physical sciences
Article-I.D.: sdcsvax.84

I am responding to an article claiming that psychology and computer
science aren't sciences.  I think that the author is seriously confused
by his prefered usage of the term ``science''.  The sciences based on
mathematics, information processing, etc., which I will here call
information sciences, e.g., linguistics, computer science, information
science, cognitive science, psychology, operations research, etc., have
very different methods of operation from sciences based upon, for
example, physics.  Since people often view physics as the prototypical
science, they become confused when they look at information sciences.
This is analogous to the confusion of the early grammarians who tried
to understand English from a background in Latin:  They decided that
English was primitive and in need of fixing, and proceeded to create
Grammar schools in which we were all supposed to learn how to speak
our native language properly (i.e., with intrusions of Latin grammar).

If someone wants to have a private definition of the word science to
include only some methods of operation, that's their privilege, as
long as they don't want to try to use words to communicate with other
human beings.  But we shouldn't waste too much time defining terms,
when we could be exploring the nature and utility of the methodologies
used in the various disciplines.  In that light, let me say something
about the methodologies of two of the disciplines as I understand and
practice them, respectively.

Physics:  There is here the assumption of a simple underlying reality,
which we want to discover through elegant theorizing and experimenting.
Compared to other disciplines, e.g., experimental psychology, many of
the experimental tools are crude, e.g., the statistics used.  A theoretical
psychologist would probably find the distance that often separates physical
theory from experiment to be enormous.  This is perfectly all right, given
the (assumed) simple nature of underlying reality.

Computer Science:  Although in any mathematically based science one
might say that one is discovering knowledge, in many ways it makes
better sense in computer science to say that one is creating as much
as discovering.  Someone will invent a new language, a new architecture,
or a new algorithm, and people will abandon older languages, architectures
and algorithms.  A physicist would find this strange, because these objects
are no less valid for having been surpassed (the way an outdated physical
theory would be), but are simply no longer interesting.

Let me stop here, and solicit some input from people involved in other
disciplines.  What are your methods of investigation?  Are you interested
in creating theories about reality, or creating artificial or abstract
realities?  What is your basis for calling your discipline a science,
or do you?  Please do not waste any time saying that some other discipline
is not a science because it doesn't do things the way yours does!

-Greg

------------------------------

Date: Sun, 11 Dec 83 20:43 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: re: mental states

Ken Laws in his little editorializing comment on my last note seems to
have completely missed the point. Whether FSA's can display mental
states is an argument I leave to others on this list. However, John
McCarthy's definition allows ant hills and colloidal suspensions to
have mental states.

------------------------------

Date: Sun, 11 Dec 1983 15:04:10 EST
From: AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications
      Mgr.)
Subject: Culture and Vision

    Several people have recently been bringing up the question of the
effects of culture on visual perception.  This problem has been around
in anthropology, folkloristics, and (to some extent) in sociolinguistics
for a number of years.  I've personally taken a number of graduate courses
that focussed on this very topic.
     Individuals interested in this problem (or, more precisely, group of
problems) should look into the Society for the Anthropology of Visual
Communication (SAVICOM) and its journal.  You'll find that the terminology
is often unfamiliar, but the concerns are similar.  The society is based
at the University of Pennsylvania's Annenberg School of Communications,
and is formally linked with such relevant groups as the American Anthro-
pological Assn.
     Folks who want more info, citations, etc. on this can also contact
me personally by netmail, as I'm not sure that this is sufficiently
relevant to take up too much of AI's space.
     Dave Axler
     (Axler.Upenn-1100@Rand-Relay)


[Extract from further correspondence with Dave:]

     There is a thing called "Visual Anthropology", on the
other hand, which deals with the ways that visual tools such as film, video,
still photography, etc., can be used by the anthropologist.  The SAVICOM
journal occasionally has articles dealing with the "meta" aspects of visual
anthropology, causing it, at such times, to be dealing with the anthropology
of visual anthropology (or, at least, the epistemology thereof...)

                                     --Dave Axler

------------------------------

Date: Mon 12 Dec 83 21:16:43-PST
From: Martin Giles <MADAGIL@SU-SIERRA.ARPA>
Subject: A humanities view of computers and natural language

The following is a copy of an article in the Stanford Campus Report,
7th December, 1983, in response to an article describing research at
Stanford.  The University has just received a $21 million grant for
research in the fields of natural and computer languages.

                                Martin

[I have extracted a few relevant paragraphs from the following 13K-char
flame.  Anyone wanting the full text can contact AIList-Request or FTP
it from <AILIST>COHN.TXT on SRI-AI.  I will delete it after a few weeks.
-- KIL]


  Mail-From: J.JACKSON1 created at 10-Dec-83 10:29:54
  Date: Sat 10 Dec 83 10:29:54-PST
  From: Charlie Jackson  <J.JACKSON1@LOTS-A>
  Subject: F; (Gunning Fog Index 20.18); Cohn on Computer Language Study
  To: bboard@LOTS-A

  Following is a letter found in this week's Campus Report that proves
  Humanities profs make as good flames as any CS hacker.  Charlie

        THE NATURE OF LANGUAGE IS ALREADY KNOWN WITHOUT COMPUTERS

  Following is a response from Robert Greer Cohn, professor of French, to
the Nov. 30 Campus Report article on the study of computer and natural
language.

        The ambitious program to investigate the nature of language in
connection with computers raises some far-reaching questions.  If it is
to be a sort of Manhattan project, to outdo the Japanese in developing
machines that "think" and "communicate" in a sophisticated way, that is
one thing, and one may question how far a university should turn itself
towards such practical, essentially engineering, matters.  If on the
other hand, they are serious about delving into the  nature of languages
for the sake of disinterested truth, that is another pair of shoes.
        Concerning the latter direction: no committee ever instituted
has made the kind of breakthrough individual genius alone can
accomplish. [...]
        Do they want to know the nature of language?  It is already
known.
The great breakthrough came with Stephane Mallarme, who as Edmund
Wilson (and later Hugh Kenner) observed, was comparable only to Einstein
for revolutionary impact.  He is responsible more than anyone, even
Nietzsche, for the 20th-century /episteme/, as most French first-rank
intellectuals agree (for example, Foucault, in "Les mots et les choses";
Sartre, in his preface to the "Poesies"; Roland Barthes, who said in his
"Interview with Stephen Heath," "All we do is repeat Mallarme";
Jakobson; Derrida; countless others).
        In his "Notes" Mallarme saw the essence of language as
"fiction," which is to say it is based on paradox.  In the terms of
Darwin, who describes it as "half art, half instinct," this means that
language, as related to all other reality (hypothetically nonlinguistic,
experimental) is "metaphorical" -- as we now say after Jakobson -- i.e.
above and below the horizontal line of on-going, spontaneous,
comparatively undammed, life-flow or experience; later, as the medium
of whatever level of creativity, it bears this relation to the
conventional and rational real, sanity, sobriety, and so on.
        In this sense Chomsky's view of language as innate and
determined is a half-truth and not very inspired.  He would have been
better off if he had read and pondered, for example, Pascal, who three
centuries ago knew that "nature is itself only a first 'custom'"; or
Shakespeare: "The art itself is nature" (The Winter's Tale).
        [...]

        But we can't go into all the aspects of language here.
        In terms of the project:  since, on balance, it is unlikely the
effects will go the way of elite French thought on the subject, there
remains the probability that they will try to recast language, which is
at its best creatively free (as well as determined at its best by
organic totality, which gives it its ultimate meaning, coherence,
harmony), into the narrow mold of the computer, even at /its/ best.
        [...]

        COMPUTERS AND NEWSPEAK

        In other words, there is no way to make a machine speak anything
other than newspeak, the language of /1984/.  They may overcome that
flat dead robotic tone that our children enjoy -- by contrast, it gives
them the feeling that they are in command of life -- but the thought and
the style will be spiritually inert.  In that sense, the machines, or
the new language theories, will reflect their makers, who, in harnessing
themselves to a prefabricated goal, a program backed by a mental arms
race, will have been coopted and dehumanized.  That flat (inner or
outer) tone is a direct result of cleaving to one-dimensionality, to the
dimension of the linear and "metonymic," the dimension of objectivity,
of technology and science, uninformed and uninspired by the creatively
free and whole-reflecting ("naive") vertical, or vibrant life itself.
        That unidimensionality is visible in the immature personalities
of the zealots who push these programs:  they are not much beyond
children in their Frankenstein eagerness to command the frightening
forces of the psyche, including sexuality, but more profoundly, life
itself, in its "existential" plenitude involving death.
        People like that have their uses and can, with exemplary "tunnel
vision," get certain jobs done (like boring tunnels through miles of
rock).  A group of them can come up with /engineering/ breakthroughs in
that sense, as in the case of the Manhattan project.  But even that
follows the /creative/ breakthroughs of the Oppenheimers and Tellers and
Robert D. (the shepherd in France) and is a rather pedestrian endeavor
under the management of some colonel.
        When I tried to engage a leader of the project in discussion
about the nature of language, he refused, saying, "The humanities and
sciences are farther apart than ever," clearly welcoming this
development.  This is not only deplorable in itself; far worse,
according to the most accomplished mind on /their/ side of the fence in
this area, this man's widely-hailed thinking is doomed to a dead end,
because of its "unidimensionality!"
        This is not the place to go into the whole saddening bent of
our times and the connection with totalitarianism, which is "integrated
systems" with a vengeance.  But I doubt that this is what our founders
had in mind.

------------------------------

End of AIList Digest
********************

∂16-Dec-83  1327	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #113    
Received: from SRI-AI by SU-AI with TCP/SMTP; 16 Dec 83  13:27:18 PST
Date: Fri 16 Dec 1983 10:02-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #113
To: AIList@SRI-AI


AIList Digest            Friday, 16 Dec 1983      Volume 1 : Issue 113

Today's Topics:
  Alert - Temporal Representation & Fuzzy Reasoning
  Programming Languages - Phrasal Analysis Paper,
  Fifth Generation - Japanese and U.S. Views,
  Seminars - Design Verification & Fault Diagnosis
----------------------------------------------------------------------

Date: Wed 14 Dec 83 11:21:47-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: CACM Alert - Temporal Representation & Fuzzy Reasoning

Two articles in the Nov. issue of CACM (just arrived) may be of
special interest to AI researchers:


"Maintaining Knowledge about Temporal Intervals," by James F. Allen
of the U. of Rochester, is about representation of temporal information
using only intervals -- no points.  While this work does not lead to a
fully general temporal calculus, it goes well beyond state space and
date line systems and is more powerful and efficient than event chaining
representations.  I can imagine that the approach could be generalized
to higher dimensions, e.g., for reasoning about the relationships of
image regions or objects in the 3-D world.
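As a rough illustration (my own sketch, not Allen's representation,
which maintains networks of symbolic relations without numeric
endpoints), a few of the thirteen basic interval relations can be
stated as:

```python
def relation(a, b):
    """Classify interval a = (a0, a1) against b = (b0, b1).
    Endpoints appear here only to demonstrate the vocabulary;
    Allen's system reasons over the relations themselves."""
    a0, a1 = a
    b0, b1 = b
    if a1 < b0:
        return "before"
    if a1 == b0:
        return "meets"
    if a0 == b0 and a1 == b1:
        return "equal"
    if b0 < a0 and a1 < b1:
        return "during"
    if a0 < b0 < a1 < b1:
        return "overlaps"
    return "other"  # starts, finishes, and the inverse relations omitted
```

Allen's algorithm then propagates constraints through a transitivity
table over these relations rather than comparing endpoints.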


"Extended Boolean Information Retrieval," by Gerald Salton, Edward A. Fox,
and Harry Wu, presents a fuzzy logic or hierarchical inference method for
dealing with uncertainties when evaluating logical formulas.  In a
formula such as ((A and B) or (B and C)), they present evidential
combining formulas that allow for:

  * Uncertainty in the truth, reliability, or applicability of
    the primitive terms A and B;

  * Differing importance of establishing the primitive term instances
    (where the two B terms above could be weighted differently);

  * Differing semantics of the logical connectives (where the two
    "and" connectives above could be threshold units with different
    thresholds).

The output of their formula evaluator is a numerical score.  They use
this for ranking the pertinence of literature citations to a database
query, but it could also be used for evidential reasoning or for
evaluating possible worlds in a planning system.  For the database
query system, they indicate a method for determining term weights
automatically from an inverted index of the database.

The weighting of the Boolean connectives is based on the infinite set
of Lp vector norms.  The connectives and[INF] and or[INF] are the
ones of standard logic; and[1] and or[1] are equivalent and reduce
formula evaluation to a simple weighted summation; intermediate
connective norms correspond to "mostly" gates or weighted neural
logic models.  The authors present both graphical illustrations and
logical theorems about these connectives.
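A sketch of the unweighted p-norm connectives as I understand them from
the article (the paper additionally attaches importance weights to each
term and connective; those are omitted here):

```python
def p_or(xs, p):
    """Extended Boolean OR over term scores xs in [0,1]."""
    if p == float("inf"):
        return max(xs)                  # strict Boolean or
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)

def p_and(xs, p):
    """Extended Boolean AND over term scores xs in [0,1]."""
    if p == float("inf"):
        return min(xs)                  # strict Boolean and
    return 1.0 - (sum((1.0 - x) ** p for x in xs) / len(xs)) ** (1.0 / p)

def query_score(a, b, c, p=2.0):
    """((A and B) or (B and C)), as in the example formula above."""
    return p_or([p_and([a, b], p), p_and([b, c], p)], p)
```

At p = 1 both connectives collapse to the same arithmetic mean, which is
the equivalence of and[1] and or[1] noted above; intermediate values of
p interpolate between weighted summation and strict logic.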

                                        -- Ken Laws

------------------------------

Date: 14 Dec 83 20:05:25-PST (Wed)
From: hplabs!hpda!fortune!phipps @ Ucb-Vax
Subject: Re: Phrasal Analysis Paper/Programming Languages Applications ?
Article-I.D.: fortune.1981

Am I way off base, or does this look as if the VOX project
would be of interest to programming languages (PL) researchers ?
It might be interesting to submit to the next
"Principles of Programming Languages" (POPL) conference, too.

As people turn from traditional programming languages
(is Ada really pointing the way of the future ? <shudder !>) to other ways
(query languages and outright natural language processing)
to obtain and manipulate information and codified knowledge,
I believe that AI and PL people will find more overlap in their ends,
although probably not their respective interests, approaches, and style.
This institutionalized mutual ignorance doesn't benefit either field.
One of these days, AI people and programming languages people
ought to heal their schism.

I'd certainly like to hear more of VOX, and would cheerfully accept delivery
of a copy of your paper (US Mail (mine): PO Box 2284, Santa Clara CA 95055).
My apologies for using the net for a reply, but he's unreachable
thru USENET, and I wanted to make a general point anyhow.

-- Clay Phipps

--
   {allegra,amd70,cbosgd,dsd,floyd,harpo,hollywood,hpda,ihnp4,
    magic,megatest,nsc,oliveb,sri-unix,twg,varian,VisiA,wdl1}
   !fortune!phipps

------------------------------

Date: 12 Dec 83 15:29:10 PST (Monday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: New Generation computing: Japanese and U.S. views

  [The following is a direct submission to AIList, not a reprint.
  It has also appeared on the Stanford bboards, and has generated
  considerable discussion there.  I am distributing this and the
  following two reprints because they raise legitimate questions
  about the research funding channels available to AI workers.  My
  distribution of these particular messages should not be taken as
  evidence of support for or against military research. -- KIL]

from Japan:

  "It is necessary for each researcher in the New Generation Computer
technology field to work for world prosperity and the progress of
mankind.

  "I think it is the responsibility of each researcher, engineer and
scientist in this field to ensure that KIPS [Knowledge Information
Processing System] is used for good, not harmful, purposes.  It is also
necessary to investigate KIPS's influence on society concurrent with
KIPS's development."

  --Tohru Moto-Oka, University of Tokyo, editor of the new journal "New
Generation Computing", in the journal's founding statement (Vol. 1, No.
1, 1983, p. 2)



and from the U.S.:

  "If the new generation technology evolves as we now expect, there will
be unique new opportunities for military applications of computing.  For
example, instead of fielding simple guided missiles or remotely piloted
vehicles, we might launch completely autonomous land, sea, and air
vehicles capable of complex, far-ranging reconnaissance and attack
missions.  The possibilities are quite startling, and suggest that new
generation computing could fundamentally change the nature of future
conflicts."

  --Defense Advanced Research Projects Agency, "Strategic Computing:
New Generation Computing Technology: A Strategic Plan for its
Development and Application to Critical Problems in Defense,"  28
October 1983, p. 1

------------------------------

Date: 13 Dec 83 18:18:23 PST (Tuesday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: Re: New Generation computing: Japanese and U.S. views

                [Reprinted from the SU-SCORE bboard.]

My juxtaposition of quotations is intended to demonstrate the difference
in priorities between the Japanese and U.S. "next generation" computer
research programs.  Moto-Oka is a prime mover behind the Japanese
program, and DARPA's Robert Kahn is a prime mover behind the American
one.  Thus I consider the quotations comparable.

To put it bluntly:  the Japanese say they are developing this technology
to help solve human and social problems.  The Americans say they are
developing this technology to find more efficient ways of killing
people.

The difference in intent is quite striking, and will undoubtedly produce
a "next-generation" repetition of an all too familiar syndrome.  While
the U.S. pours yet more money and scientific talent into the military
sinkhole, the Japanese invest their monetary and human capital in
projects that will produce profitable industrial products.

Here are a couple more comparable quotes, both from IEEE Spectrum, Vol.
20, No. 11, November 1983:

  "DARPA intends to apply the computers developed in this program to a
number of broad military applications...
  "An example might be a pilot's assistant that can respond to spoken
commands by a pilot and carry them out without error, drawing upon
specific aircraft, sensor, and tactical knowledge stored in memory and
upon prodigious computer power.  Such capability could free a pilot to
concentrate on tactics while the computer automatically activated
surveillance sensors, interpreted radar, optical, and electronic
intelligence, and prepared appropriate weapons systems to counter
hostile aircraft or missiles....
  "Such systems may also help in military assessments on a battlefield,
simulating and predicting the consequences of various courses of
military action and interpreting signals acquired on the battlefield.
This information could be compiled and presented as sophisticated
graphics that would allow a commander and his staff to concentrate on
the larger strategic issues, rather than having to manage the enormous
data flow that will[!] characterize future battles."
    --Robert S. Cooper and Robert E. Kahn, DARPA, page 53.

  "Fifth generation computer systems are expected to fulfill four
major roles:  (1) enhancement of productivity in low-productivity areas,
such as nonstandardized operations in smaller industries;  (2)
conservation of national resources and energy through optimal energy
conversion; (3) establishment of medical, educational, and other kinds
of support systems for solving complex social problems, such as the
transition to a society made up largely of the elderly;  and (4)
fostering of international cooperation through the machine translation
of languages."
    --Tohru Moto-Oka, University of Tokyo, page 46


Which end result would *you* rather see?

/Ron

------------------------------

Date: Tue 13 Dec 83 21:29:22-PST
From: John B. Nagle <NAGLE@SU-SCORE.ARPA>
Subject: Comparable quotes

                [Reprinted from the SU-SCORE bboard.]

     The goals of an effort funded by the military will be different
than those of an effort aimed at trade dominance.  Intel stayed out of
the DoD VHSIC program because the founder of Intel felt that concentrating
on fast, expensive circuits would be bad for business.  He was right.
The VHSIC program is aimed at making a few hundred copies of an IC for
a few thousand each.  Concentration on that kind of product will bankrupt
a semiconductor company.
     We see the same thing in AI.  There is getting to be a mini-industry
built around big expensive AI systems on big expensive computers.  Nobody
is thinking of volume.  This is a direct consequence of the funding source.
People think in terms of keeping the grants coming in, not selling a
million copies.  If money came from something like MITI, there would be
pressure to push forward to a volume product just to find out if there
is real potential for the technology in the real world.  Then there would
be thousands of people thinking about the problems in the field, not
just a few hundred.
     This is diverging from the main thrust of the previous flame, but
think about this and reply.  There is more here than another stab at the
big bad military.

------------------------------

Date: Tue 13 Dec 83 10:40:04-PST
From: Sumit Ghosh <GHOSH@SU-SIERRA.ARPA>
Subject: Ph.D. Oral Examination: Special Seminar

             [Reprinted from the SU-SCORE bboard.]


   ADA Techniques for Implementing a Rule-Based Generalised Design Verifier

                     Speaker: Sumit Ghosh

                    Ph.D. Oral Examination
             Monday, 19th Dec '83. 3:30pm. AEL 109


This thesis describes a top-down, rule-based design verifier implemented in
the language ADA. During verification of a system design, a designer needs
several different kinds of simulation tools such as functional simulation,
timing verification, fault simulation, etc. Often these tools are implemented
in different languages, on different machines, thereby making it difficult to
correlate results from different kinds of simulations. Also, the system design
must be described separately for each kind of simulation, implying a
substantial overhead. The rule-based approach enables one to create different
kinds of simulations, within the same simulation environment, by linking
appropriate types of models with the system nucleus. This system also features
zooming whereby certain subsections of the system design (described at a high
level) can be expanded at a lower level, at run time, for a more detailed
simulation. The expansion process is recursive and should be extended down to
the circuit level. At the present implementation stage, zooming is extended to
gate level simulation. Since only those modules that show a discrepancy (or
require detailed analysis) during simulation are simulated in detail, the
zoom technique implies a substantial reduction in complexity and CPU time.
This thesis further contributes towards a functional deductive fault simulator
and a generalised timing verifier.

------------------------------

Date: Mon 12 Dec 83 12:46-EST
From: Philip E. Agre <AGRE%MIT-OZ@MIT-MC.ARPA>
Subject: Walter Hamscher at the AI Revolving Seminar

                 [Reprinted from the MIT-AI bboard.]

AI Revolving Seminar
Walter Hamscher

Diagnostic reasoning for digital devices with static storage elements

Wednesday 14 December 83 4PM
545 Tech Sq 8th floor playroom


We view diagnosis as a process of reasoning from anomalous observations to a
set of components whose failure could explain the observed misbehaviors.  We
call these components "candidates."  Diagnosing a misbehaving piece of
hardware can be viewed as a process of generating, discriminating among, and
refining these candidates.  We wish to perform this diagnosis by using an
explicit representation of the hardware's structure and function.

Our candidate generation methodology is based on the notions of dependency
directed backtracking and local propagation of constraints.  This
methodology works well for devices without storage elements such as
flipflops.  This talk presents a representation for the temporal behavior of
digital devices which allows devices with storage elements to be treated
much the same as combinational devices for the purpose of candidate
generation.
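A toy version of candidate generation in this spirit (a hypothetical
two-path arithmetic circuit of my own devising, not Hamscher's system):
predict each observable output from the design model and, where
prediction and observation disagree, suspect every component the
prediction depended on.

```python
def candidates(model, inputs, observations):
    """model maps each output name to (function, input names, components
    on the computation path).  Return the components that could explain
    some observed discrepancy."""
    suspects = set()
    for out, (fn, deps, parts) in model.items():
        predicted = fn(*(inputs[d] for d in deps))
        if predicted != observations[out]:
            suspects |= parts   # the dependency record names the parts
    return suspects

# Hypothetical device: f = (a+b)*c through ADD1 and MUL1;
#                      g = (b+d)*c through ADD2 and MUL2.
model = {
    "f": (lambda a, b, c: (a + b) * c, ("a", "b", "c"), {"ADD1", "MUL1"}),
    "g": (lambda b, c, d: (b + d) * c, ("b", "c", "d"), {"ADD2", "MUL2"}),
}
```

Real dependency-directed systems record these support sets during
constraint propagation instead of listing them by hand, and then
discriminate among the candidates with further probes.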

However, the straightforward adaptation requires solutions to subproblems
that are severely underconstrained.  This in turn leads to an overly
conservative and not terribly useful candidate generator.  There exist
mechanism-oriented solutions such as value enumeration, propagation of
variables, and slices; we review these and then demonstrate what domain
knowledge can be used to motivate appropriate uses of those techniques.
Beyond this use of domain knowledge within the current representation, there
are alternative perspectives on the problem which offer some promise of
alleviating the lack of constraint.

------------------------------

End of AIList Digest
********************

∂18-Dec-83  1526	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #114    
Received: from SRI-AI by SU-AI with TCP/SMTP; 18 Dec 83  15:23:50 PST
Date: Sun 18 Dec 1983 11:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #114
To: AIList@SRI-AI


AIList Digest            Sunday, 18 Dec 1983      Volume 1 : Issue 114

Today's Topics:
  Intelligence - Confounding with Culture,
  Jargon - Mental States,
  Scientific Method - Research Methodology
----------------------------------------------------------------------

Date: 13 Dec 83 10:34:03-PST (Tue)
From: hplabs!hpda!fortune!amd70!dual!onyx!bob @ Ucb-Vax
Subject: Re: Intelligence = culture
Article-I.D.: onyx.112

I'm surprised that there have been no references  to  culture  in
all of these "what is intelligence?" debates...

The simple fact of the matter is, that "intelligence" means  very
little  outside  of  any specific cultural reference point.  I am
not referring just to culturally-biased vs. non-culturally-biased
IQ tests, although that's a starting point.

Consider someone raised from infancy in the jungle  (by  monkeys,
for  the  sake  of the argument). What signs of intelligence will
this person show?  Don't expect them  to  invent  fire  or  stone
axes;  look  how  long it took us the first time around. The most
intelligent thing that person could do would be on par with  what
we  see chimpanzees doing in the wild today (e.g. using sticks to
get ants, etc).

What I'm driving at is that there  are  two  kinds  of  "intelli-
gence"; there is "common sense and ingenuity" (monkeys, dolphins,
and a few people), and there is  "cultural  methodology"  (people
only).

Cultural methodologies include  all  of  those  things  that  are
passed  on  to  us  as a "world-view", for instance the notion of
wearing clothes, making fire, using arithmetic to figure out  how
many  people  X  bags of grain will feed, what spices to use when
cooking, how to talk (!), all of these things were at one time  a
brilliant  conception  in  someone's mind. And it didn't catch on
the first time around. Probably not  the  second  or  third  time
either.  But eventually someone convinced other people to try his
idea, and it became part of that culture. And  using  that  as  a
context  gives  other  people  an  opportunity  to bootstrap even
further. One small step for a man, a giant leap for his culture.

When we think about intelligence and get impressed by how wonder-
ful  it  is, we are looking at its application in a world stuffed
to the gills with prior context that is indispensable  to  every-
thing we think about.

What this leaves us with is people trying to define and measure a
hybrid  of  common  sense  and culture without noticing that what
they are interested in is actually two different things, plus the
inter-relations  between  those  things,  so  no wonder the issue
seems so murky.

For those who may be interested, general systems theory,  general
semantics,  and  epistemology  are  some fascinating related sub-
jects.

Now let's see some letters about what "common sense" is  in  this
context,  and about applying that common sense to (cultural) con-
texts. (How recursive!)

------------------------------

Date: Tue, 13 Dec 83 11:24 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: re: mental states

I am very intrigued by Fernando Pereira's last comment:

    Sorry, you missed the point that JMC and then I were making. Prigogine's
    work (which I know relatively well) has nothing to say about systems
    which have to model in their internal states equivalence classes of
    states of OTHER systems. It seems to me impossible to describe such
    systems unless certain sets of states are labeled with things
    like "believe(John,have(I,book))". That is, we start associating
    classes of internal states to terms that include mentalistic
    predicates.

I may be missing the point, since I am not sure what "model in their internal
states equivalence classes of states of OTHER systems" means. But I think
what you are saying is that `reasoning systems' that encode in their state
information about the states of other systems (or their own) are not
covered by Ilya Prigogine's work.

I think you are engaging in a leap of faith here. What is the basis
for believing that any sort of encoding of the state of other systems is
going on here? I don't think even the philosophical guard phrase
`equivalence class' protects you in this case.

To continue in my role of sceptic: if you claim that you are constructing
systems that model their internal state (or other systems' internal states)
[or even an equivalence class of those states], then I will claim that
my linear programming model of a computer parts inventory is also
exhibiting `mental reasoning', since it is modeling the internal states
of that computer parts inventory.

This means that Prigogine's work is operative in the case of FSA-based
`reasoning systems' since they can do no more modeling of the internal
state of another system than a colloidal suspension, or an inventory
control system built by an operations research person.


                                - Steven Gutfreund
                                  Gutfreund.umass@csnet-relay

------------------------------

Date: Wed 14 Dec 83 17:46:06-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Mental states of machines

The only reason I have to believe that a system encodes in its states
classifications of the states of other systems is that the systems we
are talking about are ARTIFICIAL, and therefore this is part of our
design. Of course, you are free to say that down at the bottom our
system is just a finite-state machine, but that's about as helpful as
making the same statement about the computer on which I am typing this
message when discussing how to change its time-sharing resource
allocation algorithm.

Besides this issue of convenience, it may well be the case that
certain predicates on the states of other or the same system are
simply not representable within the system. One does not even need to
go as far as incompleteness results in logic: in a system which has
means to represent a single transitive relation (say, the immediate
accessibility relation for a maze), no logical combination can
represent the transitive closure (accessibility relation) [example due
to Bob Moore]. Yet the transitive closure is causally connected to the
initial relation in the sense that any change in the latter will lead
to a change in the former. It may well be the case (SPECULATION
WARNING!) that some of the "mental state" predicates have this
character, that is, they cannot be represented as predicates over
lower-level notions such as states.
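Bob Moore's maze example can be made concrete with a small sketch (mine, not part of the original message): the transitive closure has to be computed by iterating the base relation to a fixed point, and any change to the immediate-accessibility relation propagates to the closure.

```python
def transitive_closure(edges):
    """Transitive closure of a binary relation (a set of pairs),
    computed by joining the relation with itself to a fixed point."""
    closure = set(edges)
    while True:
        new_pairs = {(a, d)
                     for (a, b) in closure
                     for (c, d) in closure if b == c}
        if new_pairs <= closure:
            return closure
        closure |= new_pairs

# Immediate accessibility in a corridor maze: 1 -> 2 -> 3 -> 4.
maze = {(1, 2), (2, 3), (3, 4)}
reachable = transitive_closure(maze)   # adds (1, 3), (2, 4), (1, 4)

# The closure is causally connected to the base relation: adding one
# passage, (4, 1), makes every room reachable from every other.
```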

-- Fernando Pereira

------------------------------

Date: 12 Dec 83 7:20:10-PST (Mon)
From: hplabs!hao!seismo!philabs!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Mental states of machines
Article-I.D.: dciem.548

Any discussion of the nature and value of mental states in either
humans or machines should include consideration of the ideas of
J. G. Taylor (no relation). In his "Behavioral Basis of Perception"
(Yale University Press, 1962), he sets out mathematically a basis
for changes in perception/behaviour dependent on transitions into
different members of "sets" of states. These "sets" look very like
the mental states referenced in the earlier discussion, and may
be tractable in studies of machine behaviour. They also tie in
quite closely with the recent loose talk about "catastrophes" in
psychology, although they are much better specified than the analogists'
models. The book is not easy reading, but it is very worthwhile, and
I think the ideas still have a lot to offer, even after 20 years.

Incidentally, in view of the mathematical nature of the book, it
is interesting that Taylor was a clinical psychologist interested
initially in behaviour modification.

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt

------------------------------

Date: 14 Dec 1983 1042-PST
From: HALL.UCI-20B@Rand-Relay
Subject: AI Methods

After listening in on the communications concerning definitions
of intelligence, AI methods, AI results, AI jargon, etc., I'd
like to suggest an alternate perspective on these issues.  Rather
than quibbling over how AI "should be done," why not take a close
look at how things have been and are being done?  This is more of
a social-historical viewpoint, admitting the possibility that
adherents of differing methodological orientations might well
"talk past each other" - hence the energetic argumentation over
issues of definition.  In this spirit, I'd like to submit the
following for interested AILIST readers:

         Toward a Taxonomy of Methodological
    Perspectives in Artificial Intelligence Research

                 Rogers P. Hall
               Dennis F. Kibler

                   TR  108
                September 1983

      Department of Information and Computer Science
         University of California, Irvine
             Irvine, CA   92717

                    Abstract

    This paper is an attempt to explain the apparent confusion of
efforts in the field of artificial intelligence (AI) research in
terms of differences between underlying methodological perspectives
held by practicing researchers.  A review of such perspectives
discussed in the existing literature will be presented, followed by
consideration of what a relatively specific and usable taxonomy of
differing research perspectives in AI might include.  An argument
will be developed that researchers should make their methodological
orientations explicit when communicating research results, both as
an aid to comprehensibility for other practicing researchers and as
a step toward providing a coherent intellectual structure which can
be more easily assimilated by newcomers to the field.

The full report is available from UCI for a postage fee of $1.30.
Electronic communications are welcome:

    HALL@UCI-20B
    KIBLER@UCI-20B

------------------------------

Date: 15 Dec 1983 9:02-PST
From: fc%usc-cse%USC-ECL@MARYLAND
Subject: Re: AIList Digest   V1 #112 - science

        In my mind, science has always been the practice of using the
'scientific method' to learn. In any discipline, this is used to some
extent, but in a pure science it is used in its purest form. This
method seems to be founded in the following principles:

1       The observation of the world through experiments.

2       Attempted explanations in terms of testable hypotheses - they
        must explain all known data, predict as yet unobserved results,
        and be falsifiable.

3       The design and use of experiments to test predictions made by these
        hypotheses in an attempt to falsify them.

4       The abandonment of falsified hypotheses and their replacement
        with more accurate ones - GOTO 2.
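Read as an algorithm, the four steps form a loop. A playful sketch (my own; the `observe`, `propose`, and `falsified` callbacks are hypothetical stand-ins for real scientific work):

```python
def scientific_method(observe, propose, falsified, max_rounds=100):
    """Iterate the steps above: observe (1), hypothesize (2), test (3),
    and replace falsified hypotheses (4) -- then GOTO 2."""
    data = observe()                           # step 1: observation
    hypothesis = propose(data, previous=None)  # step 2: explanation
    for _ in range(max_rounds):
        data = data + observe()                # step 3: new experiment
        if falsified(hypothesis, data):        # step 4: abandon, replace,
            hypothesis = propose(data, previous=hypothesis)  # GOTO 2
        else:
            return hypothesis                  # survives -- for now
    return hypothesis

# Toy usage: the "world" always measures 7; hypotheses are constants.
result = scientific_method(
    observe=lambda: [7],
    propose=lambda data, previous: sum(data) / len(data),
    falsified=lambda h, data: any(x != h for x in data),
)
```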

        Experimental psychology is indeed a science if viewed from this
perspective. So long as hypotheses are made and predictions tested with
some sort of experiment, the crudity of the statistics is similar to
the statistical models of physics used before it was advanced to its
current state. Computer science (or whatever you call it) is also a
science in the sense that our understanding of computers is based on
prediction and experimentation. Anyone who says you don't experiment
with a computer hasn't tried it.

        The big question is whether mathematics is a science. I guess
it is, but somehow any system in which you only falsify or verify based
on the assumptions you made leaves me a bit concerned. Of course we are
context bound in any other science, and can't often see the forest for
the trees, but on the other hand, accidental discovery based on
experiments with results which are unpredictable under the current theory
is not really possible in a purely mathematical system.

        History is probably not a science in the above sense because,
although there are hypotheses with possible falsification, there is
little chance of performing an experiment in the past. Archeological
findings may be thought of as an experiment of the past, but I think
this sort of experiment is of quite a different nature than those that
are performed in other areas I call science. Archeology, by the way, is
probably a science in the sense of my definition not because of the
ability to test hypotheses about the past through experimental
diggings, but because of its constant development and experimental
testing of theory about the way nature changes things over time.
The ability to determine the type of wood burned in an ancient fire and
the year in which it was burned is based on the scientific process that
archeologists use.

                        Fred

------------------------------

Date: 13 Dec 83 15:13:26-PST (Tue)
From: hplabs!hao!seismo!philabs!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Information sciences vs. physical sciences
Article-I.D.: dciem.553

*** This response is routed to net.philosophy as well as the net.ai
where it came from. Responders might prefer to edit net.ai out of
the Newsgroups: line before posting.


    I am responding to an article claiming that psychology and computer
    science aren't sciences.  I think that the author is seriously confused
    by his preferred usage of the term ``science''.


I'm not sure, but I think the article referenced was mine. In any case,
it seems reasonable to clarify what I mean by "science", since I think
it is a reasonably common meaning. By the way, I do agree with most of
the article that started with this comment, that it is futile to
define words like "science" in a hard and fast fashion. All I want
here is to show where my original comment comes from.

"Science" has obviously a wide variety of meanings if you get too
careful about it, just as does almost any word in a natural language.
But most meanings of science carry some flavour of a method for
discovering something that was not known by a method that others can
repeat. It doesn't really matter whether that method is empirical,
theoretical, experimental, hypothetico-deductive, or whatever, provided
that the result was previously uncertain or not obvious, and that at
least some other people can reproduce it.

I argued that psychology wasn't a science mainly on the grounds that
it is very difficult, if not impossible, to reproduce the conditions
of an experiment on most topics that qualify as the central core of
what most people think of as psychology. Only the grossest aspects
can be reproduced, and only the grossest characterization of the
results can be stated in a way that others can verify. Neither do
theoretical approaches to psychology provide good prediction of
observable behaviour, except on a gross scale. For this reason, I
claimed that psychology was not a science.

Please note that in saying this, I intend in no way to downgrade the
work of practicing psychologists who are scientists. Peripheral
aspects, and gross descriptions are susceptible to attack by our
present methods, and I have been using those methods for 25 years
professionally. In a way it is science, but in another way it isn't
psychology. The professional use of the word "psychology" is not that
of general English. If you like to think what you do is science,
that's fine, but remember that the definition IS fuzzy. What matters
more is that you contribute to the world's well-being, rather than
what you call the way you do it.
--

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt

------------------------------

Date: 14 Dec 83 20:01:52-PST (Wed)
From: hplabs!hpda!fortune!rpw3 @ Ucb-Vax
Subject: Re: Information sciences vs. physical sc - (nf)
Article-I.D.: fortune.1978

I have to throw my two bits in:

The essence of science is "prediction". The missing step in the classic
hypothesis-experiment-analysis paradigm presented above is that
"hypothesis" should be read as "theory-prediction".

That is, no matter how well the hypothesis explains the current data, it
can only be tested on data that has NOT YET BEEN TAKEN.

Any sufficiently overdetermined model can account for any given set of data
by tweaking the parameters. The trick is, once calculated, do those parameters
then predict as yet unmeasured data, WITHOUT CHANGING the parameters?
("Predict" means "within an reasonable/acceptable confidence interval
when tested with the appropriate statistical methods".)
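A hypothetical numerical sketch of this point (the numbers and functions are mine, not from the posting): a model with as many free parameters as data points "explains" the data exactly, yet fails badly on a measurement that was not yet taken. The overdetermined model here is an interpolating polynomial in Newton's divided-difference form; the underlying law is y = 2x, observed with small fixed errors.

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients of the interpolating polynomial
    through the points (xs[i], ys[i])."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, x):
    """Evaluate the Newton-form polynomial at x (nested multiplication)."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
errs = [0.5, -0.3, 0.4, -0.2, 0.3, -0.1]      # fixed "measurement error"
ys = [2.0 * x + e for x, e in zip(xs, errs)]

coef = newton_coeffs(xs, ys)                  # six parameters, six points

# The tweaked parameters reproduce all six observations exactly...
max_fit_error = max(abs(newton_eval(coef, xs, x) - y)
                    for x, y in zip(xs, ys))

# ...but at x = 10, data NOT YET TAKEN, the true law gives 20.0,
# while the tuned model predicts something wildly different.
prediction = newton_eval(coef, xs, 10.0)
```

The six-parameter fit matches the training data to rounding error, but its prediction at x = 10 is off by orders of magnitude: with the parameters frozen, the model fails exactly the test the posting describes.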

Why am I throwing this back into "ai"? Because (for me) the true test
of whether "ai" has/will become a "science" is when its theories/hypotheses
can successfully predict (cf. above) the behaviour of existing "natural"
intelligences (whatever you mean by that, man/horse/porpoise/ant/...).

------------------------------

End of AIList Digest
********************

∂21-Dec-83  0613	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #115    
Received: from SRI-AI by SU-AI with TCP/SMTP; 21 Dec 83  06:12:38 PST
Date: Tue 20 Dec 1983 21:48-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #115
To: AIList@SRI-AI


AIList Digest           Wednesday, 21 Dec 1983    Volume 1 : Issue 115

Today's Topics:
  Neurophysics - Left/Right-Brain Citation Request,
  Knowledge Representation,
  Science & Computer Science & Expert Systems,
  Science - Definition,
  AI Funding - New Generation Computing
----------------------------------------------------------------------

Date: 16 Dec 83 13:10:45-PST (Fri)
From: decvax!microsoft!uw-beaver!ubc-visi!majka @ Ucb-Vax
Subject: Left / Right Brain
Article-I.D.: ubc-visi.571

From: Marc Majka <majka@ubc-vision.UUCP>

I have heard endless talk, and read endless numbers of magazine-grade
articles about left / right brain theories.  However, I have not seen a
single reference to any scientific evidence for these theories. In fact,
the only reasonably scientific discussion I heard stated quite the opposite
conclusion about the brain:  That although it is clear that different parts
of the brain are associated with specific functions, there is no logical
(analytic, mathematical, deductive, sequential) / emotional (synthetic,
intuitive, inductive, parallel) pattern in the hemispheres of the brain.

Does anyone on the net have any references to any studies that have been
done concerning this issue?  I would appreciate any directions you could
provide.  Perhaps, to save the load on this newsgroup (since this is not an
AI question), it would be best to mail directly to me.  I would be happy to
post a summary to this group.

Marc Majka - UBC Laboratory for Computational Vision

------------------------------

Date: 15 Dec 83 20:12:46-PST (Thu)
From: decvax!wivax!linus!utzoo!watmath!watdaisy!rggoebel @ Ucb-Vax
Subject: Re: New Topic (technical) - (nf)
Article-I.D.: watdaisy.362

Bob Kowalski has said that the only way to represent knowledge is
using first order logic.   ACM SIGART Newsletter No. 70, February 1980
surveys many of the people in the world actually doing representation
research, and few of them agree with Kowalski.   Is there anyone out
there than can substantiate a claim for actually ``representing'' (what
ever that means) ``knowledge?''   Most of the knowledge representation
schemes I've seen are really deductive information description languages
with quasi-formal extensions.   I don't have a good definition of what
knowledge is...but ask any mathematical logician (or mathematical
philosopher) what they think about calling something like KRL a
knowledge representation language.

Randy Goebel
Logic Programming Group
University of Waterloo
Waterloo, Ontario, CANADA N2L 3G1

------------------------------

Date: 13 Dec 83 8:14:51-PST (Tue)
From: hplabs!hao!seismo!philabs!linus!security!genrad!wjh12!foxvax1!br
      unix!jah @ Ucb-Vax
Subject: Re: RE: Expert Systems
Article-I.D.: brunix.5992

I don't understand what the "size" of a program has to do with anything.
The notion that size is important seems to support the idea that the
word "science" in "computer science" belongs in quote marks.  That is,
that CS is just a bunch of hacks anyhow.
 The theory folks, whom I think most of us would call computer scientists,
write almost no programs.  Yet, I'd say their contribution to CS is
quite important (who analyzed the sorting algorithm you used this morning?)
 At least some parts of AI are still Science (with a capital "S").  We are
exploring issues involving cognition and memory, as well as building the
various programs that we call "expert systems" and the like.  Pople's group,
for example, is examining how it is that expert doctors come to make
diagnoses.  He is interested in the computer application, but also in the
understanding of the underlying process.


 Now, while we're flaming, let me also mention that some AI programs have
been awfully large.  If you are into the "bigger is better" mentality, I
suggest a visit to Yale and a view of some of the language programs there.
How about FRUMP, whose 1978 version took up three processes, each
using over 100K of memory, with source code running to several hundred
pages and word definitions for over 10,000 words.  A little bigger
than Haunt??

  Pardon all this verbiage, but I think AI has shown itself both on
the scientific level, by contributions to the field of psychology
(and linguistics, for that matter) and to the state of
the art in computer technology, and on the engineering level, by
designing and building some very large programs and some new
programming techniques and tools.

  -Jim Hendler

------------------------------

Date: 19 Dec 1983 15:00-EST
From: Robert.Frederking@CMU-CS-CAD.ARPA
Subject: Re: Math as science

        Actually, my library's encyclopedia says that mathematics isn't
a science, since it doesn't study phenomena, but rather is "the
language of science".  Perhaps part of the fuzziness about
AI-as-science is that we are creating most of the phenomena we are
studying, and the more theoretical components of what we are doing look
a lot like mathematical logic, which isn't a science.

------------------------------

Date: Mon, 19 Dec 1983 10:21:47 EST
From: AXLER.Upenn-1100@Rand-Relay (David M. Axler - MSCF Applications Mgr.)
Subject: Defining "Science"

     For better or worse, there really isn't such a thing as a prototypical
science.  The meaning of the word 'science' has always been different in
different realms of discourse:  what the "average American" means by the term
differs from what a physicist means, and neither of them would agree with an
individual working in one of the 'softer' fields.
     This is not something we want to change, in my view.  The belief that
there must be one single standardized definition for a very general term is
not a useful one, especially when the term is one that does not describe an
explicit, material thing (e.g., blood, pencil, etc.).  Abstract terms are
always dependent on the social context of their use for their definition; it's
just that academics often forget (or fail to note) that contexts other than
their own fields exist.
     Even if we try and define science in terms of its usage of the "scientific
method," we find that there's no clear definition.  If you've yet to read it,
I strongly urge you to take a look at Kuhn's "The Structure of Scientific
Revolutions," which is one of the most important books written about science.
He looks at what the term has meant, and does mean, in various disciplines
at various periods, and examines very carefully how the definitions were, in
reality, tied to other socially-defined notions.  It's a seminal work in the
study of the history and sociology of science.
     The social connotations of words like science affect us all every day.
In my personal opinion, one of the major reasons why the term 'computer
science' is gaining popularity within academia is that it dissociates the
field from engineering.  The latter field has, at least within most Western
cultures, a social stigma of second-class status attached to it, precisely
because it deals with mundane reality (the same split, of course, comes up
twixt pure and applied mathematics).  A good book on this, by the way, is
Samuel Florman's "The Existential Pleasures of Engineering"; his more recent
volume, "Blaming Technology", is also worth your time.
--Dave Axler

------------------------------

Date: Fri 16 Dec 83 17:32:56-PST
From: Al Davis <ADavis at SRI-KL>
Subject: Re: AIList Digest V1 #113


In response to the general "gee, the Japanese are good guys and the
Americans are schmucks and warmongers" view, and as a member of
one of the planning groups that wrote the DARPA SC plan, I offer the
following questions for thought:

1.  If you were Bob Kahn and were trying to get funding to permit
continued growth of technology under the Reagan administration, would
you ask for $750 million and say that you would do things in such a
way as to prevent military use?

2.  If it were not for DARPA how would we be reading and writing all
this trivia on the ARPAnet?

3.  If it were not for DARPA how many years (hopefully fun, productive,
and challenging) would have been fundamentally different?

4.  Is it possible that the Japanese mean "Japanese society" when they
target programs for "the good of ?? society"?

5.  Is it really possible to develop advanced computing technology that
cannot be applied to military problems?  Can lessons of
destabilization of the US economy be learned from the automobile,
steel, and TV industries?

6.  It is obvious that the Japanese are quick to take, copy, etc. in
terms of technology and profit.  Have they given much back? Note:  I like
my Sony TV and Walkman as much as anybody does.

7.  If DARPA is evil then why don't we all move to Austin and join MCC
and promote good things like large corporate profit?

8.  Where would AI be if DARPA had not funded it?

Well the list could go on, but the direction of this diatribe is
clear.  I think that many of us (me too) are quick to criticize and
slow to look past the end of our noses.  One way to start to improve
society is to climb down off the &%↑$&↑ ivory tower ourselves.  I for
one have no great desire to live in Japan.

                                                Al Davis

                                                ADAVIS @ SRI-KL

------------------------------

Date: Tue, 20 Dec 1983  09:13 EST
From: HEWITT%MIT-OZ@MIT-MC.ARPA
Subject: New Generation computing: Japanese and U.S. motivations

Ron,

I believe that you have painted a misleading picture of a complex situation.

From talking to participants involved, I believe that MITI is
funding the Japanese Fifth Generation Project primarily for commercial
competitive advantage.  In particular they hope to compete with IBM
more effectively than as plug-compatible manufacturers.  MITI also
hopes to increase Japanese intellectual prestige.  Congress is funding
Strategic Computing to maintain and strengthen US military and
commercial technology.  A primary motivation for strengthening the
commercial technology is to meet the Japanese challenge.

------------------------------

Date: 20 Dec 83 20:41:06 PST (Tuesday)
From: Ron Newman <Newman.es@PARC-MAXC.ARPA>
Subject: Re: New Generation computing: Japanese and U.S. motivations

Are we really in disagreement?

It seems pretty clear from my quotes, and from numerous writings on the
subject, that the Japanese intend to use the Fifth Generation Project to
strengthen their position in commercial markets.  We don't disagree
there.

It also seems clear that, as you say, "Congress is funding a project
called Strategic Computing to maintain and strengthen US military and
commercial technology."  That should be parsed as "Military technology
first, with hopes of commercial spinoff."

If you think that's a misleading distortion, read the DARPA Strategic
Computing Report.  Pages 21 through 29 contain detailed specifications
of the requirements of three specific military applications.   There is
no equivalent specification of non-military application
requirements--only a vague statement on page 9 that commercial spinoffs
will occur.  Military requirements and terminology permeate the entire
report.

If the U.S. program is aimed at military applications, that's what it
will produce.  Any commercial or industrial spinoff will be incidental.
If we are serious about strengthening commercial computer technology,
then that's what we should be aiming for.  As you say, that's certainly
what the Japanese are aiming for.

Isn't it about time that we put our economic interests first, and the
military second?

/Ron

------------------------------

End of AIList Digest
********************

∂22-Dec-83  2213	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #116    
Received: from SRI-AI by SU-AI with TCP/SMTP; 22 Dec 83  22:12:58 PST
Date: Thu 22 Dec 1983 19:37-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #116
To: AIList@SRI-AI


AIList Digest            Friday, 23 Dec 1983      Volume 1 : Issue 116

Today's Topics:
  Optics - Request for Camera Design,
  Neurophysiology - Split Brain Research,
  Expert Systems - System Size,
  AI Funding - New Generation Computing,
  Science - Definition
----------------------------------------------------------------------

Date: Wed, 21 Dec 83 14:43:29 PST
From: Philip Kahn <v.kahn@UCLA-LOCUS>
Subject: REFERENCES FOR SPECIALIZED CAMERA DESIGN USING FIBER OPTICS

        In a conventional TV camera, the image falls upon a staring
array of transducers.  The problem is that it is very difficult to
get very close to the focal point of the optical system using this
technology.
        I am looking for a design of a camera imaging system
that projects the light image onto a fiber optic bundle.
The optical fibers are used to transport the light falling upon
each pixel  away from the camera focal point so that the light
may be quantized.
        I'm sure that such a system has already been designed, and
I would greatly appreciate any references that would be appropriate
to this type of application.  I need to build a computer model of such a system,
so the pertinent optical physics and related information would be
MOST useful.
        If there are any of you that might be interested in this
type of camera system, please contact me.  It promises to provide
the degree of resolution which is a constraint in many vision
computations.

                                Visually yours,
                                Philip Kahn

------------------------------

Date: Wed 21 Dec 83 11:38:36-PST
From: Richard F. Lyon <DLyon at SRI-KL>
Subject: Re: AIList Digest V1 #115

  In reply to <majka@ubc-vision.UUCP> on left/right brain research:

    Most of the work on split brains and hemispheric specialization
has been done at Caltech by Dr. Roger Sperry and colleagues.  The 1983
Caltech Biology annual report has 5 pages of summary results, and 11
recent references by Sperry's group.  Previous year annual reports
have similar amounts.  I will mail copies if given an address.
        Dick Lyon
        DLYON@SRI-KL

------------------------------

Date: Wednesday, 21 December 1983 13:48:54 EST
From: John.Laird@CMU-CS-H
Subject: Haunt and other production systems.

A few facts on production systems.

1. Haunt consists of 1500 productions and requires 160K words of memory on a
KL10. (So FRUMP is a bit bigger than Haunt.)

2. Expert systems (R1, XSEL and PTRANS) written in a similar language
consist of around 1500-2500 productions.

3. An expert system to perform VLSI design (DAA) consists of around 200
productions.

------------------------------

Date: 19 Dec 83 17:37:56-PST (Mon)
From: decvax!dartvax!lorien @ Ucb-Vax
Subject: Re: Humanistic Japanese vs. Military Americans
Article-I.D.: dartvax.536

Does anyone know of any groups doing serious AI in the U.S. or Europe
that emulate the Japanese attitude?

--Lorien

------------------------------

Date: Wed 21 Dec 83 13:04:21-PST
From: Andy Freeman <ANDY@SU-SCORE.ARPA>
Subject: Re: AIList Digest   V1 #115

"If the U.S. program is aimed at military applications, that's what it
will produce.  Any commercial or industrial spinoff will be
incidental."

It doesn't matter what DoD and the Japanese project aim for.  We're
not talking about spending a million on designing bullets but
something much more like the space program.  The meat of that
specification was "American on Moon with TV camera" but look what else
happened.  Also, the goal was very low volume, but many of the
products aren't.

Hardware, which is probably the majority of the specification, could
be where the crossover will be greatest.  Even if they fail to get "a
lisp machine in every tank", they'll succeed in making one for an
emergency room.  (Camping gear is a recent example of something
similar.)  Yes, they'll be able to target software applications, but
at least the tools, skills, and people move.  What distinguishes a US
Army database system anyway?

I can understand the objection that the DoD shouldn't have "all those
cycles", but that isn't one of the choices.  (How they are to be used
is, but not through the research.)  The new machines are going to be
built - if nothing else the DoD can use Japanese ones.  Even if all
other things were equal (and I don't think the economic ones are), why
should they have all the fun?

-andy

------------------------------

Date: Wednesday, 21 December 1983, 19:27-EST
From: Hewitt at MIT-MC
Subject: New Generation Computing: Japanese and U.S. motivations

Ron,

For better or worse, I do not believe that you can determine what will
be the motivations or structure of either the MITI Fifth Generation
effort or the DARPA Strategic Computing effort by citing chapter and
verse from the two reports which you have quoted.

/Carl

------------------------------

Date: Wed, 21 Dec 83 22:55:04 EST
From: BRINT <abc@brl-bmd>
Subject: AI Funding - New Generation Computing

It seems to me that intelligent folks like AIList readers
should realize that the only reason Japan can fund peaceful
and humanitarian research to the exclusion of
military projects is that the United States provides the
military protection and security guarantees (out of our own
pockets) that make this sort of thing possible.

(I believe Al Davis said it well in the last Digest.)

------------------------------

Date: 22 Dec 83 13:52:20 EST
From: STEINBERG@RUTGERS.ARPA
Subject: Strategic Computing: Defense vs Commerce

Yes, it is a sad fact about American society that a project like
Strategic Computing will only be funded if it is presented as a
defense issue rather than a commercial/economic one.  (How many people
remember that the original name for the Interstate Highway system had
the word "Defense" in it?)  This is something we can and
should work to change, but I do not believe that it is the kind of
thing that can be changed in a year or two.  So, we are faced with the
choice of waiting until we change society, or getting the AI work done
in a way that is not perfectly optimal for producing
commercial/economic results.

It should be noted that achieving the military goals will require very
large advances in the underlying technology that will certainly have
very large effects on non-military AI.  It is not just a vague hope
for a few spinoffs.  So while doing it the DOD way may not be optimal
it is not horrendously sub-optimal.

There is, of course, a moral issue of whether we want the military to
have the kinds of capabilities implied by the Strategic Computing
plan.  However, if the answer is no then you cannot do the work under
any funding source.  If the basic technology is achieved in any way,
then the military will manage to use it for their purposes.

------------------------------

Date: 18 Dec 83 19:47:50-PST (Sun)
From: pur-ee!uiucdcs!parsec!ctvax!uokvax!andree @ Ucb-Vax
Subject: Re: Information sciences vs. physical sc - (nf)
Article-I.D.: uiucdcs.4598

    The definitions of Science that were offered, in defense of
    "computer Science" being a science, were just irrelevant.
    A field can lay claim to Science, if it uses the "scientific method"
    to make advances, that is:

    Hypotheses are proposed.
    Hypotheses are tested by objective experiments.
    The experiments are objectively evaluated to prove or
            disprove the hypotheses.
    The experiments are repeatable by other people in other places.

                                    - Keremath,  care of:
                                      Robison
                                      decvax!ittvax!eosp1
                                      or:   allegra!eosp1


I have to disagree. Your definition of `science' excludes at least one
thing that almost certainly IS a science: astronomy. The major problem
here is that most astronomers (all extra-solar astronomers) simply cannot
do experiments, which is why they call it `observational astronomy.'

I would guess what is needed is three (at least) flavors of science:

        1) experimental sciences: physics, chemistry, biology, psychology.
        Any field that uses the `scientific method.'

        2) observational sciences: astronomy, sociology, etc. Any field that,
        for some reason or another, must be satisfied with observing
        phenomena, and cannot perform experiments.

        3) ? sciences: mathematics, some cs, probably others. Any field that
        explores the universe of the possible, as opposed to the universe of
        the actual.

What should the ? be? I don't know. I would tend to favor `logical,' but
something tells me a lot of people will object.

        <mike

------------------------------

Date: 21 Dec 1983 14:36-PST
From: fc%usc-cse%USC-ECL@SRI-NIC
Subject: Re: AIList Digest   V1 #115

        The reference to Kuhn's 'The Structure of Scientific Revolutions'
is appreciated, but if you take a good look at the book itself, you
will find it severely lacking in scientific practice. Besides being
palpably inconsistent, Kuhn's book claims several facts about history
that are not correct, and uses them in support of his arguments. One of
his major arguments is that historians rewrite the facts; thus he acted
in exactly this manner, rewriting facts to support his contentions. He
defined the term 'paradigm' inconsistently, and even though it is in
common use today, it has still not been given a consistent definition.
He also made several other inconsistent definitions, and has even given
up this view of science (if you bother to read the papers written after
his book).

    It just goes to show you, you shouldn't believe everything you read,
                                        Fred

------------------------------

End of AIList Digest
********************

∂30-Dec-83  0322	LAWS@SRI-AI.ARPA 	AIList Digest   V1 #117    
Received: from SRI-AI by SU-AI with TCP/SMTP; 30 Dec 83  03:22:32 PST
Date: Thu 29 Dec 1983 23:42-PST
From: AIList Moderator Kenneth Laws <AIList-REQUEST@SRI-AI>
Reply-to: AIList@SRI-AI
US-Mail: SRI Int., 333 Ravenswood Ave., Menlo Park, CA  94025
Phone: (415) 859-6467
Subject: AIList Digest   V1 #117
To: AIList@SRI-AI


AIList Digest            Friday, 30 Dec 1983      Volume 1 : Issue 117

Today's Topics:
  Reply - Fiber Optic Camera,
  Looping Problem - Loop Detection and Classical Psychology,
  Logic Programming - Horn Clauses, Disjunction, and Negation,
  Alert - Expert Systems & Molecular Design,
  AI Funding - New Generation Discussion,
  Science - Definition
----------------------------------------------------------------------

Date: 23 Dec 1983 11:59-EST
From: David.Anderson@CMU-CS-G.ARPA
Subject: fiber optic camera?

The University of Pittsburgh Observatory is experimenting with just
such an imaging system in one of their major projects, trying to
(indirectly) observe planetary systems around nearby stars.  They claim
that the fiber optics provide so much more resolution than the
photography they used before that they may well succeed.  Another major
advantage to them is that they have been able to automate the search;
no more days spent staring at photographs.

--david

------------------------------

Date: Fri 23 Dec 83 12:01:07-EST
From: Michael Rubin <RUBIN@COLUMBIA-20.ARPA>
Subject: Loop detection and classical psychology

I wonder if we've been incorrectly thinking of the brain's loop detection
mechanism as a sort of monitor process sitting above a train of thought,
and deciding when the latter is stuck in a loop and how to get out of it.
This approach leads to the problem of who monitors the monitor, ad
infinitum.  Perhaps the brain detects loops in *hardware*, by classical
habituation.  If each neuron is responsible for one production (more or
less), then a neuron involved in a loop will receive the same inputs so
often that it will get tired of seeing those inputs and fire less
frequently (return a lower certainty value), breaking the loop.  The
detection of higher level loops such as "Why am I trying to get this PhD?"
implies that there is a hierarchy of little production systems (or
whatever), one for each chunk of knowledge.  [Next question - how are
chunks formed?  Maybe there's a low-level explanation for that too, having
to do with classical conditioning....]
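[A toy simulation makes the habituation idea concrete. This is an
editorial sketch, not anything from the original message: the Production
class, its decay rate, and its firing threshold are all invented for
illustration. It just shows a unit whose certainty decays under repeated
identical input until it stops firing, breaking the loop in "hardware"
with no monitor process above it. -- Ed.]

```python
class Production:
    """Toy production unit whose certainty habituates to repeated input."""

    def __init__(self, name, threshold=0.5, decay=0.8):
        self.name = name
        self.certainty = 1.0        # confidence attached to each firing
        self.threshold = threshold  # below this, the unit refuses to fire
        self.decay = decay          # habituation rate (an assumed constant)
        self.last_input = None

    def fire(self, inputs):
        if inputs == self.last_input:
            self.certainty *= self.decay   # same input again: habituate
        else:
            self.certainty = 1.0           # novel input: full recovery
        self.last_input = inputs
        if self.certainty < self.threshold:
            return None                    # "tired" unit breaks the loop
        return self.certainty

p = Production("lookup-word")
results = [p.fire(("the",)) for _ in range(5)]
# certainty decays on each repetition until the unit stops firing (None)
```

No monitoring hierarchy is needed: the loop dies because the looping
unit itself returns a lower certainty each time around.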

BTW, I thought of this when I read some word or other so often that it
started looking funny; that phenomenon has gotta be a misfeature of loop
detection.  Some neuron in the dictionary decides it's been seeing that damn
word too often, so it makes its usual definition less certain; the parse
routine that called it gets an uncertain definition back and calls for
help.
                        --Mike Rubin <Rubin@Columbia-20>

------------------------------

Date: 27 Dec 1983 16:30:08-PST
From: marcel.uiuc@Rand-Relay
Subject: Re: a trivial reasoning problem?

This is an elaboration of why a problem I submitted to the AIList seems
to be unsolvable using regular Horn clause logic, as in Prolog. First I'll
present the problem (of my own devising), then my comments, for your critique.

        Suppose you are shown two lamps, 'a' and 'b', and you
        are told that, at any time,

                1. at least one of 'a' or 'b' is on.
                2. whenever 'a' is on, 'b' is off.
                3. each lamp is either on or off.

        WITHOUT using an exhaustive generate-and-test strategy,
        enumerate the possible on-off configurations of the two
        lamps.

If it were not for the exclusion of dumb-search-and-filter solutions, this
problem would be trivial. The exclusion has left me baffled, even though
the problem seems so logical. Check me on my thinking about why it's so
difficult.

1. The first constraint (one or both lamps on) is not regular Horn clause
   logic. I would like to be able to state (as a fact) that

        on(a) OR on(b)

   but since regular Horn clauses are restricted to at most one positive
   literal I have to recode this. I cannot assert two independent facts
   'on(a)', 'on(b)' since this suggests that 'a' and 'b' are always both
   on. I can however express it in regular Horn clause form:

        not on(b) IMPLIES on(a)
        not on(a) IMPLIES on(b)

   As it happens, both of these are logically equivalent to the original
   disjunction. So let's write them as Prolog:

        on(a) :- not on(b).
        on(b) :- not on(a).

   First, this is not what the disjunction meant. These rules say that 'a'
   is provably on only when 'b' is not provably on, and vice versa, when in
   fact 'a' could be on no matter what 'b' is.

   Second, a question   ?- on(X).  will result in an endless loop.

   Third, 'a' is not known to be on except when 'b' is not known to be on
   (which is not the same as when 'b' is known to be off). This sounds as
   if the closed-world assumption might let us get away with not being able
   to prove anything (if we can't prove something we can always assume its
   negation). Not so. We do not know ANYTHING about whether 'a' or 'b' are
   on OR off; we only know about constraints RELATING their states. Hence
   we cannot even describe their possible states, since that would require
   filling in (by speculative hypothesis) the states of the lamps.

   What is wanted is a non-regular Horn clause, but some of the nice
   properties of Logic Programming (e.g. completeness and consistency under
   the closed-world assumption, alias a reasonable negation operator) do
   not apply to non-regular Horn clauses.

2. The second constraint (whenever 'a' is on, 'b' is off) shares some of the
   above problems, and a new one. We want to say

        on(a) IMPLIES not on(b),   or    not on(b) :- on(a).

   but this is not possible in Prolog; we have to say it in what I feel to
   be a rather contrived manner, namely

        on(b) :- on(a), !, fail.

   Unfortunately this makes no sense at all to a theoretician. It is trying
   to introduce negative information, but under the closed-world assumption,
   saying that something is NOT true is just the same as not saying it at all,
   so the clause is meaningless.

   Alternative: define a new predicate off(X) which is complementary to on(X).
   That is the conceptualization suggested by the third problem constraint.

3.      off(X) :- not on(X).
        on(X)  :- not off(X).

   This idea has all the problems of the first constraint, including the
   creation of another endless loop.

It seems this problem is beyond the capabilities of present-day logic
programming. Please let me know if you can find a solution, or if you think
my analysis of the difficulties is inaccurate.
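[As a sanity check on the analysis above, the two expected configurations
can be confirmed by exactly the generate-and-test enumeration the puzzle
forbids. The following Python sketch is an editorial illustration, not part
of the original post; it is offered only to pin down what a cleverer
logic-programming solution ought to produce. -- Ed.]

```python
from itertools import product

# Brute-force check of the lamp puzzle -- precisely the dumb
# search-and-filter strategy the puzzle disallows, used here only to
# confirm which configurations a smarter solver should derive.
# True = on, False = off; constraint 3 (each lamp is either on or
# off) is built into the two-valued domain itself.

models = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if (a or b)              # constraint 1: at least one lamp is on
    and (not a or not b)     # constraint 2: whenever a is on, b is off
]
# Exactly two models survive: a on with b off, and a off with b on.
```

The point of the puzzle, of course, is to reach these same two models by
propagating the constraints rather than by enumerating the whole state
space.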

                                        Marcel Schoppers
                                        U of Illinois at Urbana-Champaign
                                        {pur-ee|ihnp4}!uiucdcs!marcel

------------------------------

Date: Mon 26 Dec 83 22:15:06-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Subject: High Technology Articles

The January issue of High Technology has a fairly good introduction
to expert systems for commercial applications.  As usual for this
magazine, there are corporate names and addresses and product
prices.  The article mentions that there are probably fewer than
200 "knowledge engineers" in the country, most at universities
and think tanks; an AI postdoc willing to go into industry, but
with no industry experience, can command $70K.

The business outlook section is not the usual advice column
for investors, just a list of some well-known AI companies.  The
article is also unusual in that it bases a few examples of knowledge
representation and inference on the fragment BIRD IS-A MAMMAL.


Another interesting article is "Designing Molecules by Computer".
Several approaches are given, but one seems particularly pertinent
to the recent AIList discussion of military AI funding.  Du Pont
researchers are studying how a drug homes in on its receptor site.
They use an Army program that generates line-of-sight maps for
TV-controlled antitank missiles to "fly" a drug in and observe how its
ability to track its receptor site on the enzyme surface is influenced
by a variety of force fields and solvent interactions.  A different
simulation with a similar purpose uses robotic software for assembling
irregular components to "pick up" the drug and "insert" it in the
enzyme.

                                        -- Ken Laws

------------------------------

Date: 23 December 1983 21:41 est
From: Dehn at MIT-MULTICS (Joseph W. Dehn III)
Subject: "comparable" quotes

Person at University of Tokyo, editor of a scientific/engineering
journal, says computers will be used to solve human problems.

Person at DARPA says computers will be used to make better weapons
("ways of killing people").

Therefore, Japanese are humane, Americans are warmongers.

Huh?

What is somebody at DARPA supposed to say is the purpose of his R&D
program?  As part of the Defense Department, that agency's goal SHOULD
be to improve the defense of the United States.  If they are doing
something else, they are wasting the taxpayer's money.  There are
undoubtedly other considerations involved in DARPA's activities,
bureaucratic, economic, scientific, etc., but, nobody should be
astonished when an official statement of purpose states the official
purpose!

Assuming the nation should be defended, and assuming that advanced
computing can contribute to defense, it makes sense for the national
government to take an interest in advanced computing for defense.  Thus,
the question should not be, "why do Americans build computers to kill
people", but rather why don't they, like the Japanese, ALSO, and
independent of defense considerations (which are, as has been pointed
out, different in Japan), build computers "to produce profitable
industrial products"?

Of course, before we try to solve this puzzle, we should first decide
that there is something to be solved.  Is somebody suggesting that
because there are no government or quasi-government statements of
purpose, that Americans are not working on producing advanced and
profitable computer products?  What ARE all those non-ARPA people doing
out there in netland, anyway?  Where are IBM's profits coming from?

How can we meaningfully compare the "effort" being put into computer
research in Japan and the U.S.?  Money? People?  How about results?
Which country has produced more working AI systems (you pick the
definition of "working" and "AI")?

                           -jwd3

------------------------------

Date: 29 Dec 1983 09:11:34-PST
From: Mike Brzustowicz <mab@aids-unix>
Subject: Japan again.

Just one more note.  Not only do we supply Japan's defense, but by treaty
they cannot supply their own (except for a very small national guard-type
force).

------------------------------

Date: 21 Dec 83 19:49:32-PST (Wed)
From: harpo!eagle!mhuxl!ulysses!princeton!eosp1!robison @ Ucb-Vax
Subject: Re: Information sciences vs. physical sc - (nf)
Article-I.D.: eosp1.466

I disagree -  astronomy IS an experimental science.  Even before the
age of space rockets, some celebrated astronomical experiments were
performed.  In astronomy, as in all sciences, one observes,
makes hypotheses, and then tries to verify the hypotheses by
observation.  In chemistry and physics, a lot of attention is paid
to setting up an experiment, as well as observing the experiment;
in astronomy (geology as well!), experiments consist mostly
of observation, since there is hardly anything that people are capable
of setting up.  Here are some pertinent examples:

(1) An experiment to test a theory about the composition of the sun has
been going on for several years.  It consists of an attempt to trap
neutrinos from the sun in a pool of chlorine underground.  The number
of neutrinos detected has been about 1/4 of what was predicted, leading
to new suggestions about both the composition of the sun,
and (in particle physics) the physical properties of neutrinos.

(2) An experiment to verify Einstein's theory of relativity,
particularly the hypothesis that the presence of large masses curves
space (general relativity) -- Measurements of the apparent positions
of stars near the sun, made during a total eclipse, were shifted by an
amount consistent with Einstein's theory.

Obviously, astronomical experiments will seem to lie half in the realm
of physics, since the theories of physics are the tools with which we
try to understand the skies.

Astronomers and physicists, please help me out here; I'm neither.
In fact, I don't even believe in neutrinos.

                                - Keremath,  care of:
                                  Robison
                                  decvax!ittvax!eosp1
                                  or:   allegra!eosp1

------------------------------

Date: Thu, 29 Dec 83 15:44 EST
From: Hengst.WBST@PARC-MAXC.ARPA
Subject: Re: AIList Digest   V1 #116

The flaming on the science component of computer science intrigues me
because it parallels some of the 1960's and 1970's discussion about the
science component of social science. That particular discussion, to
which Thomas Kuhn also contributed, has also not yet reached closure,
which leaves me with the feeling that science might best be described as
a particular form of behavior by practitioners who possess certain
qualifications and engage in certain rituals approved by members of the
scientific tribe.

Thus, one definition of science is that it is whatever it is that
scientists do in the name of science (a contextual and social
definition). Making coffee would not be scientific activity, but reading
a professional book or entertaining colleagues with stimulating thoughts
and writings would be. From this perspective, employing the scientific
method is merely a particular form of engaging in scientific practice
without judging the outcome of that scientific practice. Relying upon
the scientific method by unlicensed practitioners would not result in
science but in lay knowledge. This means that authoritative statements
by members of scientific community are automatically given a certain
truth value. "Professor X says this" and "scientific study Y demonstrates
that . . ." should both be considered scientific statements because
they are issued as authoritative statements in the name of science. This
interpretation of science discounts the role of Edward Teller as a
credible spokesman in the area of nuclear weapons policy in foreign
affairs.

The "licensing" of the practitioners derives from the formalization of
the training and education in the particular body of knowledge: eg. a
university degree is a form of license. Scientific knowledge can
differentiate itself from other forms of knowledge on the basis of
attempts (but not necessarily success) at formalization. Physical
sciences study phenomena which lend themselves to better quantification
(they do have better metrics!) and higher levels of formalization. The
deterministic bodies of knowledge of the physical sciences allow for
better prediction than the heavily probabilistic bodies of knowledge of
the social sciences, which facilitate explanation more than prediction.
I am not sure if a lack of predictive power or lack of availability of
the scientific method (experimental design in its many flavors) makes
anyone less a scientist. The social sciences are rich in description and
insight which in my judgment compensates for a lack of hierarchical,
deductive formal knowledge.

From this point of view computer science is science if it involves
building a body of knowledge, with attempts at formulating rules in some
consistent and verifiable manner, by a body of trained practitioners.
Medieval alchemy also qualifies due to its apprenticeship program (rules
for admitting members) and its rules for building knowledge.
Fortunately, we have better rules now.

Acco

------------------------------

Date: Thu 29 Dec 83 23:38:18-PST
From: Ken Laws <Laws@SRI-AI.ARPA>
Reply-to: AIList-Request@SRI-AI
Subject: Philosophy of Science Discussion

I hate to put a damper on the discussion of Scientific Method,
but feel it is my duty as moderator.  The discussion has been
intelligent and entertaining, but has strayed from the central
theme of this list.  I welcome discussion of appropriate research
techniques for AI, but discussion of the definition and philosophy
of science should be directed to Phil-Sci@MIT-OZ.  (Net.ai members
are free to discuss whatever they wish, of course, but I will
not pass further messages on this topic to the ARPANET readership.)

                                        -- Ken Laws

------------------------------

End of AIList Digest
********************